CoPUP: content popularity and user preferences aware content caching framework in mobile edge computing
Saved in:
Published in: Cluster computing 2023-02, Vol. 26 (1), p. 267-281
Main authors: , , , ,
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: Mobile edge computing (MEC) enables intelligent content caching at the network edge to reduce traffic and enhance content delivery efficiency. In the MEC architecture, popular content can be deployed at the MEC server to improve users' quality of experience (QoE). Existing content caching techniques attempt to improve cache hits but do not consider users' preferences when estimating content popularity. Knowing users' preferences is beneficial and essential for efficient content caching. This paper proposes Content Popularity and User Preferences aware content caching (CoPUP) in MEC. The proposed scheme first uses content-based collaborative filtering to analyze the user-content matrix and identify the relationships between different contents. A convolutional neural network (CNN) model is then used to predict users' preferences. CoPUP significantly improves cache performance, enhances the cache hit ratio, and reduces response time. Simulation experiments are conducted on the real MovieLens dataset. The proposed CoPUP technique is compared with three baseline techniques, namely Least Frequently Used (LFU), Least Recently Used (LRU), and First-In-First-Out (FIFO), and with a state-of-the-art Mobility-aware Proactive edge Caching scheme based on Federated learning (MPCF). The experimental results reveal that the proposed model achieves a 2% higher cache hit ratio and faster response times than the baseline and state-of-the-art techniques.
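The abstract's first step, analyzing a user-content matrix with content-based collaborative filtering, can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the matrix values, the cosine-similarity choice, and the popularity heuristic are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical user-content interaction matrix (rows: users, columns: contents);
# entries are request counts. Shape and values are illustrative only.
R = np.array([
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [0, 0, 4, 1],
    [1, 0, 2, 0],
], dtype=float)

def item_similarity(R):
    """Cosine similarity between content columns of the user-content matrix."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0           # avoid division by zero for unrequested items
    normalized = R / norms
    return normalized.T @ normalized  # (contents x contents) similarity matrix

S = item_similarity(R)

# Crude popularity score: aggregate demand weighted by item-item similarity.
# A cache of size k would then hold the top-k contents by this score.
popularity = S @ R.sum(axis=0)
cache = np.argsort(popularity)[::-1][:2]
```

In the paper this similarity analysis feeds a CNN that predicts per-user preferences; the score above merely shows how relationships between contents can refine a raw request-count popularity estimate.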
ISSN: 1386-7857, 1573-7543
DOI: 10.1007/s10586-022-03624-0