Memshare: a Dynamic Multi-tenant Memory Key-value Cache
Main authors: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Web application performance is heavily reliant on the hit rate of
memory-based caches. Current DRAM-based web caches statically partition their
memory across multiple applications sharing the cache. This causes
underutilization of memory, which negatively impacts cache hit rates. We present
Memshare, a novel web memory cache that dynamically manages memory across
applications. Memshare provides a resource sharing model that guarantees
private memory to different applications while dynamically allocating the
remaining shared memory to optimize overall hit rate. Today's high cost of DRAM
storage and the availability of high-performance CPUs and memory bandwidth make
web caches memory-capacity bound. Memshare's log-structured design allows it to
provide significantly higher hit rates and to dynamically partition memory among
applications at the expense of increased CPU and memory bandwidth consumption.
In addition, Memshare allows applications to use their own eviction policy for
their objects, independent of other applications. We implemented Memshare and
ran it on a week-long trace from a commercial memcached provider. We
demonstrate that Memshare increases the combined hit rate of the applications
in the trace by 6.1% (from an 84.7% hit rate to a 90.8% hit rate) and reduces
the total number of misses by 39.7%, without affecting system throughput or
latency. Even for single-tenant applications, Memshare increases the average
hit rate of the current state-of-the-art memory cache by an additional 2.7% on
our real-world trace. |
DOI: | 10.48550/arxiv.1610.08129 |
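
The summary above describes Memshare's sharing model only at a high level: each application is guaranteed a private memory reservation, and the remaining shared memory is assigned dynamically to maximize the overall hit rate. As a rough illustration of that idea, the Python sketch below shows one way such an allocator could be structured. The `App` class, the `allocate` function, and the greedy marginal-benefit policy are assumptions made for this sketch; they are not taken from the paper and do not reflect Memshare's actual arbitration mechanism.

```python
# Illustrative sketch only: a toy allocator in the spirit of the sharing model
# described in the summary above (guaranteed private memory per application,
# shared remainder assigned dynamically). All names and the greedy policy are
# hypothetical, not Memshare's actual design.

from dataclasses import dataclass


@dataclass
class App:
    name: str
    reserved_bytes: int           # private memory guaranteed to this application
    est_hits_per_extra_mb: float  # assumed estimate of marginal hit-rate benefit


def allocate(total_bytes: int, apps: list[App], chunk: int = 1 << 20) -> dict[str, int]:
    """Grant each app its private reservation, then hand out the shared
    remainder one chunk at a time to the app with the highest estimated
    marginal benefit."""
    alloc = {a.name: a.reserved_bytes for a in apps}
    shared = total_bytes - sum(alloc.values())
    if shared < 0:
        raise ValueError("private reservations exceed total cache memory")
    while shared >= chunk:
        best = max(apps, key=lambda a: a.est_hits_per_extra_mb)
        alloc[best.name] += chunk
        shared -= chunk
        # A real system would refresh this estimate from observed miss behavior;
        # here it is simply decayed so the loop spreads memory across apps.
        best.est_hits_per_extra_mb *= 0.99
    return alloc


if __name__ == "__main__":
    apps = [App("app-a", 64 << 20, 5.0), App("app-b", 32 << 20, 3.0)]
    print(allocate(256 << 20, apps))
```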