Cooperative caching with return on investment

Large scale consolidation of distributed systems introduces data sharing between consumers which are not centrally managed, but may be physically adjacent. For example, shared global data sets can be jointly used by different services of the same organization, possibly running on different virtual machines in the same data center. Similarly, neighboring CDNs provide fast access to the same content from the Internet. Cooperative caching, in which data are fetched from a neighboring cache instead of from the disk or from the Internet, can significantly improve resource utilization and performance in such scenarios. However, existing cooperative caching approaches fail to address the selfish nature of cache owners and their conflicting objectives. This calls for a new storage model that explicitly considers the cost of cooperation, and provides a framework for calculating the utility each owner derives from its cache and from cooperating with others. We define such a model, and construct four representative cooperation approaches to demonstrate how (and when) cooperative caching can be successfully employed in such large scale systems. We present principal guidelines for cooperative caching derived from our experimental analysis. We show that choosing the best cooperative approach can decrease the system's I/O delay by as much as 87%, while imposing cooperation when unwarranted might increase it by as much as 92%.
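
The abstract's idea of weighing the utility of cooperation against its cost can be illustrated with a small back-of-the-envelope sketch. The code below is not the paper's model; it is a hypothetical illustration assuming each cache owner compares the expected per-request I/O delay with and without cooperation, where peer hits are cheaper than misses but serving a neighbor's requests adds a cost. All function names and parameter values are invented for illustration.

```python
# Hypothetical illustration of "return on investment" for cooperative caching.
# NOT the model from Yadgar, Factor & Schuster (MSST 2013); just a sketch of
# the idea that cooperation pays off only when the delay saved by peer hits
# outweighs the cost of serving the peer's requests.

def expected_delay(local_hit, peer_hit, serve_peer_rate,
                   d_local=0.1, d_peer=1.0, d_miss=10.0, d_serve=0.5):
    """Expected per-request I/O delay (ms) for one cache owner.

    local_hit       -- fraction of requests served from the owner's own cache
    peer_hit        -- fraction served from a neighboring cache (0 if not cooperating)
    serve_peer_rate -- peer requests served per local request (cost of cooperation)
    d_local/d_peer/d_miss -- delay of a local hit, a peer hit, and a miss (disk/Internet)
    d_serve         -- extra delay incurred per peer request this owner serves
    """
    miss = 1.0 - local_hit - peer_hit
    return (local_hit * d_local
            + peer_hit * d_peer
            + miss * d_miss
            + serve_peer_rate * d_serve)

# Without cooperation: no peer hits, no peer-serving cost.
selfish = expected_delay(local_hit=0.5, peer_hit=0.0, serve_peer_rate=0.0)

# With cooperation: some misses become peer hits, but the owner also serves peers.
cooperative = expected_delay(local_hit=0.5, peer_hit=0.3, serve_peer_rate=0.4)

print(f"selfish: {selfish:.2f} ms, cooperative: {cooperative:.2f} ms")
print("cooperate" if cooperative < selfish else "stay selfish")
```

With these made-up numbers cooperation roughly halves the expected delay, but raising the peer-serving cost or lowering the peer hit rate flips the comparison, which mirrors the paper's point that the right approach can cut I/O delay sharply while unwarranted cooperation can increase it.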


Bibliographic details
Main authors: Yadgar, Gala; Factor, Michael; Schuster, Assaf
Format: Conference proceeding
Language: English
Subjects: Cooperative caching; Delays; Internet; Linear programming; Protocols; Servers; Time factors
Online access: Order full text
container_end_page 13
container_issue
container_start_page 1
container_title
container_volume
creator Yadgar, Gala
Factor, Michael
Schuster, Assaf
description Large scale consolidation of distributed systems introduces data sharing between consumers which are not centrally managed, but may be physically adjacent. For example, shared global data sets can be jointly used by different services of the same organization, possibly running on different virtual machines in the same data center. Similarly, neighboring CDNs provide fast access to the same content from the Internet. Cooperative caching, in which data are fetched from a neighboring cache instead of from the disk or from the Internet, can significantly improve resource utilization and performance in such scenarios. However, existing cooperative caching approaches fail to address the selfish nature of cache owners and their conflicting objectives. This calls for a new storage model that explicitly considers the cost of cooperation, and provides a framework for calculating the utility each owner derives from its cache and from cooperating with others. We define such a model, and construct four representative cooperation approaches to demonstrate how (and when) cooperative caching can be successfully employed in such large scale systems. We present principal guidelines for cooperative caching derived from our experimental analysis. We show that choosing the best cooperative approach can decrease the system's I/O delay by as much as 87%, while imposing cooperation when unwarranted might increase it by as much as 92%.
doi_str_mv 10.1109/MSST.2013.6558446
format Conference Proceeding
fulltext fulltext_linktorsrc
identifier ISSN: 2160-195X
ispartof 2013 IEEE 29th Symposium on Mass Storage Systems and Technologies (MSST), 2013, p.1-13
issn 2160-195X
2160-1968
language eng
recordid cdi_ieee_primary_6558446
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Cooperative caching
Delays
Internet
Linear programming
Protocols
Servers
Time factors
title Cooperative caching with return on investment