Container lifecycle‐aware scheduling for serverless computing
Published in: Software: Practice and Experience, 2022-02, Vol. 52 (2), p. 337-352
Main authors: , , , , , , ,
Format: Article
Language: eng
Online access: Full text
Abstract: Elastic scaling in response to changes in demand is a main benefit of serverless computing. When bursty workloads arrive, a serverless platform launches many new containers and initializes their function environments (known as cold starts), which incurs significant startup latency. To reduce cold starts, platforms usually pause a container after it serves a request and reuse that container for subsequent requests. However, this reuse strategy cannot efficiently reduce cold starts because the schedulers are agnostic to the container lifecycle: for example, a scheduler may ignore containers that will soon become available, or evict containers that will soon be needed. We propose CAS, a container lifecycle-aware scheduling strategy for serverless computing. The key idea is to control the distribution of requests and to decide on the creation or eviction of containers according to the containers' lifecycle phases. We implement a prototype of CAS on OpenWhisk. Our evaluation shows that, when there is worker contention between workloads, CAS reduces cold starts by 81% and thereby reduces 95th-percentile latency by 63% compared with the native OpenWhisk scheduling strategy, without adding significant performance overhead.
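The abstract's key idea, routing requests and timing container creation or eviction by lifecycle phase, might be sketched as follows. This is a minimal illustration, not the actual CAS implementation: the phase names, the fixed cold-start cost, and the idle-time eviction signal are all assumptions introduced here.

```python
from dataclasses import dataclass

# Illustrative cold-start latency in ms; the real cost depends on the
# runtime and function (an assumption, not a figure from the paper).
COLD_START_COST_MS = 500.0

@dataclass
class Container:
    cid: int
    phase: str = "idle"       # lifecycle phase: "idle" (warm) or "busy"
    est_free_in: float = 0.0  # busy only: estimated ms until it frees up
    idle_for: float = 0.0     # idle only: ms since it last served a request

def schedule(containers):
    """Dispatch a request with lifecycle awareness: reuse a warm idle
    container if one exists; otherwise, if some busy container will free
    up sooner than a cold start would take, wait for it; only then pay
    for a cold start. Returns (action, container id, expected delay in ms)."""
    idle = [c for c in containers if c.phase == "idle"]
    if idle:
        return ("reuse", idle[0].cid, 0.0)
    busy = [c for c in containers if c.phase == "busy"]
    if busy:
        soonest = min(busy, key=lambda c: c.est_free_in)
        if soonest.est_free_in < COLD_START_COST_MS:
            return ("wait", soonest.cid, soonest.est_free_in)
    return ("cold_start", None, COLD_START_COST_MS)

def evict_candidate(containers):
    """Pick a container to evict under memory pressure: never a busy one,
    and among idle containers prefer the one idle longest (a crude
    stand-in for 'least likely to be needed soon')."""
    idle = [c for c in containers if c.phase == "idle"]
    return max(idle, key=lambda c: c.idle_for) if idle else None
```

A lifecycle-agnostic scheduler would cold-start whenever no idle container exists; the sketch instead waits for a soon-to-be-free container when that wait is cheaper than a cold start, which is the behavior the abstract contrasts with OpenWhisk's native strategy.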
ISSN: 0038-0644; 1097-024X
DOI: 10.1002/spe.3016