Rethinking the Cloudonomics of Efficient I/O for Data-Intensive Analytics Applications
Format: Article
Language: English
Abstract: This paper explores a prevailing trend in the industry: migrating data-intensive analytics applications from on-premises to cloud-native environments. We find that the unique cost models associated with cloud-based storage necessitate a more nuanced understanding of performance optimization. Specifically, based on traces collected from Uber's Presto fleet in production, we argue that common I/O optimizations, such as table scan and filter, and broadcast join, may lead to unexpected costs when naively applied in the cloud. This is because traditional I/O optimizations mainly focus on improving throughput or latency in on-premises settings, without taking into account the monetary costs of storage API calls. In cloud environments, these costs can be significant, potentially involving billions of API calls per day for Presto workloads alone at Uber's scale. Presented as a case study, this paper serves as a starting point for further research on efficient I/O strategies tailored to data-intensive applications in cloud settings.
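To make the per-call cost model concrete, the following is a minimal back-of-envelope sketch (not from the paper). The daily call volume echoes the abstract's "billions of API calls per day"; the price per 1,000 GET requests is an assumed figure loosely modeled on public object-storage price sheets, and both constants are hypothetical placeholders.

```python
# Back-of-envelope estimate of daily object-storage request charges.
# All inputs are illustrative assumptions, not figures from the paper.

def daily_api_call_cost(calls_per_day: float, price_per_1k_calls: float) -> float:
    """Return the monetary cost (USD) of storage API calls for one day."""
    return calls_per_day / 1_000 * price_per_1k_calls

CALLS_PER_DAY = 2e9          # assumed: "billions of API calls per day"
PRICE_PER_1K_GETS = 0.0004   # assumed USD per 1,000 GET requests

if __name__ == "__main__":
    cost = daily_api_call_cost(CALLS_PER_DAY, PRICE_PER_1K_GETS)
    print(f"~${cost:,.0f} per day in GET-request charges")  # ~$800/day under these assumptions
```

Under these assumptions the request charges alone are on the order of hundreds of dollars per day, and an optimization that improves latency by issuing many more small reads scales that figure up proportionally, which is the kind of hidden cost the abstract highlights.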
DOI: 10.48550/arxiv.2311.00156