Fundamental Limits of Stochastic Shared-Cache Networks

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Malik, Adeel; Serbetci, Berksan; Parrinello, Emanuele; Elia, Petros
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description
Summary: The work establishes the exact performance limits of stochastic coded caching when users share a bounded number of cache states and when the association between users and caches is random. Under the premise that more balanced user-to-cache associations perform better than unbalanced ones, our work provides a statistical analysis of the average performance of such networks, identifying in closed form the exact optimal average delivery time. To insightfully capture this delay, we derive easy-to-compute closed-form analytical bounds that prove tight in the limit of a large number $\Lambda$ of cache states. In the scenario where delivery involves $K$ users, we conclude that the multiplicative performance deterioration due to randomness, as compared to the well-known deterministic uniform case, can be unbounded and can scale as $\Theta\left( \frac{\log \Lambda}{\log \log \Lambda} \right)$ at $K=\Theta\left(\Lambda\right)$, and that this scaling vanishes when $K=\Omega\left(\Lambda\log \Lambda\right)$. To alleviate this adverse effect of cache-load imbalance, we consider various load-balancing methods and show that, under proximity-bounded load balancing with the ability to choose from $h$ neighboring caches, the aforementioned scaling reduces to $\Theta \left(\frac{\log(\Lambda / h)}{ \log \log(\Lambda / h)} \right)$, while, when the proximity constraint is removed, the scaling drops to the much slower order $\Theta \left( \log \log \Lambda \right)$. The above analysis is extensively validated numerically.
DOI:10.48550/arxiv.2005.13847
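
The scaling laws quoted in the summary are driven by how unevenly $K$ users spread over the $\Lambda$ cache states. As a rough illustration only, and not the paper's delivery-time analysis, the following Python sketch estimates the maximum cache load under purely random user-to-cache association and under an unconstrained "join the least-loaded of $d$ candidate caches" balancer, printing both next to the $\log\Lambda/\log\log\Lambda$ and $\log\log\Lambda$ orders mentioned above. The function names, the choice $d=2$, and the simulated values of $\Lambda$ are illustrative assumptions; the proximity-bounded variant is not modeled here.

    # Illustrative balls-into-bins sketch, not the paper's delivery-time metric.
    # It compares the most loaded cache (which drives the load-imbalance penalty)
    # against the ideal uniform load K/Lambda in the K = Theta(Lambda) regime.
    import math
    import random


    def max_load_random(K, Lam, rng):
        """Each of K users joins one of Lam caches uniformly at random."""
        loads = [0] * Lam
        for _ in range(K):
            loads[rng.randrange(Lam)] += 1
        return max(loads)


    def max_load_d_choices(K, Lam, d, rng):
        """Each user samples d candidate caches and joins the least-loaded one."""
        loads = [0] * Lam
        for _ in range(K):
            candidates = [rng.randrange(Lam) for _ in range(d)]
            best = min(candidates, key=lambda c: loads[c])
            loads[best] += 1
        return max(loads)


    if __name__ == "__main__":
        rng = random.Random(0)
        for Lam in (100, 1000, 10000):
            K = Lam                               # the K = Theta(Lambda) regime
            m_rand = max_load_random(K, Lam, rng)
            m_two = max_load_d_choices(K, Lam, 2, rng)
            pred_rand = math.log(Lam) / math.log(math.log(Lam))
            pred_two = math.log(math.log(Lam))
            print(f"Lambda={Lam:6d}  uniform load={K // Lam}  "
                  f"random max={m_rand} (order ~{pred_rand:.1f})  "
                  f"2-choices max={m_two} (order ~{pred_two:.1f})")

As $\Lambda$ grows, the gap between the random maximum load and the uniform load $K/\Lambda$ widens roughly like $\log\Lambda/\log\log\Lambda$, while the two-choice balancer keeps it near $\log\log\Lambda$, mirroring the orders reported in the summary for unconstrained load balancing.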