Caching With Finite Buffer and Request Delay Information: A Markov Decision Process Approach

Bibliographic Details
Published in: IEEE Transactions on Wireless Communications, 2020-08, Vol. 19 (8), p. 5148-5161
Authors: Hui, Haiming; Chen, Wei; Wang, Li
Format: Article
Language: English
Description
Abstract: Edge caching has become a promising technology in future wireless networks owing to its remarkable ability to reduce peak data traffic. However, storage resources are often limited in practice, so only a small number of files can be cached. How to improve the cache hit ratio of finite-buffer caching based on the prediction of user demands has therefore become an important problem. In this paper, we study caching policies with a finite buffer by exploiting the prediction of a user's request time, referred to as request delay information (RDI). Based on RDI, we maximize the average cache hit ratio through a Markov decision process (MDP) approach. Specifically, we formulate an MDP problem and apply a modified value iteration algorithm to find an optimal caching policy. Moreover, we provide an upper bound and a lower bound for the cache hit ratio, as well as an analytical cache hit ratio for small buffers. To address the issue that the state space can be prohibitively large in practice, we present a low-complexity heuristic caching policy that is shown to be asymptotically optimal. Simulation results show that introducing RDI may bring a significant cache hit ratio gain when the buffer size is limited.
ISSN: 1536-1276, 1558-2248
DOI: 10.1109/TWC.2020.2989513
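
As a rough companion to the abstract above, the Python sketch below runs plain discounted value iteration on a hypothetical toy caching model with request delay information: two files, a single cache slot, and uniformly re-drawn request delays. The state space, transition model, discount factor, and all names in the code are assumptions made purely for illustration; they do not reproduce the paper's finite-buffer MDP formulation or its modified value iteration algorithm.

```python
"""
Toy value-iteration sketch for caching with request delay information (RDI).
This is an illustrative assumption-laden model, not the formulation of
Hui, Chen, and Wang (IEEE TWC 2020).
"""

import itertools

N_FILES = 2      # toy catalogue: files 0 and 1
MAX_DELAY = 3    # predicted request delays (the RDI) take values 1..MAX_DELAY
GAMMA = 0.95     # discount factor (assumption; the paper targets an average hit ratio)
EPS = 1e-6       # value-iteration stopping threshold

# State = (delay_0, delay_1): predicted slots until each file's next request.
STATES = [(d0, d1)
          for d0 in range(1, MAX_DELAY + 1)
          for d1 in range(1, MAX_DELAY + 1)]
# Action = index of the single file held in the unit-size buffer this slot.
ACTIONS = list(range(N_FILES))


def transitions(state, action):
    """Yield (probability, next_state, reward) triples for one slot.

    Each delay counts down by one; a delay reaching zero means that file is
    requested, and the request is a cache hit (reward 1) iff it matches the
    cached file chosen by `action`. A requested file's delay is then re-drawn
    uniformly from {1, ..., MAX_DELAY} (a modeling assumption).
    """
    new_d = [state[i] - 1 for i in range(N_FILES)]
    requested = [i for i in range(N_FILES) if new_d[i] == 0]
    reward = sum(1 for i in requested if i == action)

    redraw = [range(1, MAX_DELAY + 1) if i in requested else [new_d[i]]
              for i in range(N_FILES)]
    prob = (1.0 / MAX_DELAY) ** len(requested)
    for fresh in itertools.product(*redraw):
        yield prob, tuple(fresh), reward


def value_iteration():
    """Plain discounted value iteration over the toy state space."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = max(sum(p * (r + GAMMA * V[ns])
                           for p, ns, r in transitions(s, a))
                       for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < EPS:
            break
    # Greedy policy extraction from the converged value function.
    policy = {s: max(ACTIONS,
                     key=lambda a, s=s: sum(p * (r + GAMMA * V[ns])
                                            for p, ns, r in transitions(s, a)))
              for s in STATES}
    return V, policy


if __name__ == "__main__":
    _, policy = value_iteration()
    for s in sorted(STATES):
        print(f"delays {s} -> cache file {policy[s]}")
```

Running the sketch prints a greedy policy that, in this toy model, tends to cache whichever file's predicted request arrives sooner, which mirrors the intuition behind exploiting RDI when the buffer is small.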