DARES: An Asynchronous Distributed Recommender System Using Deep Reinforcement Learning

Traditional Recommender Systems (RS) use central servers to collect user data, compute user profiles and train global recommendation models. Central computation of RS models yields strong performance because the models are trained using all the available information and the full user profiles. However, centralised RS require users to share their whole interaction history with the central server and in general do not scale as the number of users and interactions increases. Central RS also present a single point of attack with respect to user privacy, because all user profiles and interactions are stored centrally. In this work we propose DARES, a distributed recommender system algorithm that uses reinforcement learning and is based on the asynchronous advantage actor-critic (A3C) model. DARES combines the approaches of A3C and federated learning (FL), allowing users to keep their data locally on their own devices. The system architecture consists of (i) a local recommendation model trained on each user device using that user's interactions and (ii) a global recommendation model trained on a central server using the model updates computed on the user devices. We evaluate the proposed algorithm on well-known datasets and compare its performance against state-of-the-art algorithms, showing that, despite being distributed and asynchronous, it achieves comparable and in many cases better performance.
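The local/global training loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes simple federated averaging of device-side parameter vectors, and a single gradient-like step stands in for the local A3C training; all function names are hypothetical.

```python
# Hedged sketch of the two-tier DARES-style loop: each user device
# updates its own copy of the model on local data, and the server
# aggregates the device copies into the global model.
from typing import List


def local_update(global_params: List[float], local_grad: List[float],
                 lr: float = 0.1) -> List[float]:
    """One local training step on a user device: move the device's copy
    of the global parameters against its locally computed gradient.
    (In the paper this would be A3C actor-critic training on the
    user's own interactions; a plain gradient step stands in here.)"""
    return [p - lr * g for p, g in zip(global_params, local_grad)]


def aggregate(global_params: List[float],
              device_params: List[List[float]]) -> List[float]:
    """Server step: replace the global model with the element-wise
    average of the parameter vectors reported by the devices."""
    n = len(device_params)
    return [sum(dp[i] for dp in device_params) / n
            for i in range(len(global_params))]


# One round over three hypothetical devices. Raw interactions never
# leave the devices; only parameter updates reach the server.
global_params = [0.0, 0.0]
local_grads = [[1.0, 2.0], [3.0, 4.0], [2.0, 0.0]]  # per-device gradients
device_copies = [local_update(global_params, g) for g in local_grads]
global_params = aggregate(global_params, device_copies)
```

In the asynchronous setting of the paper, devices would push updates at different times rather than in lock-step rounds; the synchronous round above only illustrates the division of labour between device and server.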

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, p. 83340-83354
Main authors: Shi, Bichen; Tragos, Elias Z.; Ozsoy, Makbule Gulcin; Dong, Ruihai; Hurley, Neil; Smyth, Barry; Lawlor, Aonghus
Format: Article
Language: English
Subjects:
Online access: Full text
DOI: 10.1109/ACCESS.2021.3087406
ISSN: 2169-3536
Source: IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek (freely accessible e-journals)
Subjects:
Algorithms
Biological system modeling
click through ratio
Computational modeling
Computer architecture
distributed learning
Machine learning
Privacy
Recommender systems
Reinforcement learning
Servers
Training
User profiles