User plane acceleration service for next-generation cellular networks
Saved in:
Published in: | Telecommunication Systems, 2023-12, Vol. 84 (4), p. 469-485 |
---|---|
Main authors: | , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | Reducing end-to-end latency is a key requirement for the efficient and reliable new services offered by next-generation mobile networks. In this context, it is critical for mobile network operators (MNOs) to enable faster communications over backhaul transport networks between next-generation base stations and core networks. However, MNOs will need to make new investments and optimize many points of their current transport infrastructure to serve next-generation services well. In addition, even if MNOs make these investments, faults and performance degradation may still occur in transport networks. This paper presents a new approach that reduces the dependence of MNO services on the quality of transport networks, relying instead on software updates to radio access network and core network components. A Hypertext Transfer Protocol (HTTP)-based user plane that can be cached and accelerated is proposed, making it a well-suited solution to combat transport problems in next-generation mobile networks. Numerical tests validate the proposed approach and underscore the significant improvements in transfer time, throughput, and overall performance achieved by leveraging HTTP caching and acceleration techniques. More specifically, the GPRS tunneling protocol user plane (GTP-U) is, on average, 35% slower than HTTP, with the performance difference increasing as the data size grows, primarily due to additional overhead and GTP-U encapsulation time. Additionally, HTTP caching with a cache size of 20 MB provides a 9.5% acceleration in data transfer time, with an average improvement of approximately 9% when the data size exceeds 20 MB. |
---|---|
ISSN: | 1018-4864 1572-9451 |
DOI: | 10.1007/s11235-023-01058-6 |
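The abstract's headline figures can be restated as simple arithmetic. The sketch below only encodes the two ratios reported above (GTP-U about 35% slower than HTTP on average; a 20 MB HTTP cache cutting transfer time by about 9.5%); the baseline transfer time is a hypothetical value chosen for illustration, not a measurement from the paper.

```python
# Illustrative arithmetic for the average speedups reported in the abstract.
# The 1.35 and 0.095 factors come from the abstract; the baseline below
# is a hypothetical HTTP transfer time, not a value from the paper.

def gtpu_transfer_time(http_time: float) -> float:
    """GTP-U transfer time, given that it averages ~35% slower than HTTP."""
    return http_time * 1.35

def cached_transfer_time(http_time: float) -> float:
    """HTTP transfer time with a 20 MB cache, reported as ~9.5% faster."""
    return http_time * (1.0 - 0.095)

baseline = 100.0  # hypothetical HTTP transfer time in milliseconds
print(f"GTP-U:       {gtpu_transfer_time(baseline):.1f} ms")
print(f"HTTP+cache:  {cached_transfer_time(baseline):.1f} ms")
```

Note that both figures are averages: the abstract states the GTP-U gap widens with data size, so a fixed 1.35 factor is only a first-order summary.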