Fallout: Distributed Systems Testing as a Service
All modern distributed systems list performance and scalability as their core strengths. Given that optimal performance requires carefully selecting configuration options, and typical cluster sizes can range anywhere from 2 to 300 nodes, it is rare for any two clusters to be exactly the same. Validating the behavior and performance of distributed systems in this large configuration space is challenging without automation that stretches across the software stack. In this paper we present Fallout, an open-source distributed systems testing service that automatically provisions and configures distributed systems and clients, supports running a variety of workloads and benchmarks, and generates performance reports based on collected metrics for visual analysis. We have been running the Fallout service internally at DataStax for over 5 years and have recently open sourced it to support our work with Apache Cassandra, Pulsar, and other open source projects. We describe the architecture of Fallout along with the evolution of its design and the lessons we learned operating this service in a dynamic environment where teams work on different products and favor different benchmarking tools.
Saved in:
Published in: | arXiv.org 2021-10 |
---|---|
Main authors: | Guy Bolton King, Sean McCarthy, Pushkala Pattabhiraman, Jake Luciani, Matt Fleming |
Format: | Article |
Language: | eng |
Subjects: | Computer networks, Configurations, Fallout, Pulsars, Source code, Workload |
Online access: | Full text |
container_title | arXiv.org |
creator | Guy Bolton King; McCarthy, Sean; Pattabhiraman, Pushkala; Luciani, Jake; Fleming, Matt |
description | All modern distributed systems list performance and scalability as their core strengths. Given that optimal performance requires carefully selecting configuration options, and typical cluster sizes can range anywhere from 2 to 300 nodes, it is rare for any two clusters to be exactly the same. Validating the behavior and performance of distributed systems in this large configuration space is challenging without automation that stretches across the software stack. In this paper we present Fallout, an open-source distributed systems testing service that automatically provisions and configures distributed systems and clients, supports running a variety of workloads and benchmarks, and generates performance reports based on collected metrics for visual analysis. We have been running the Fallout service internally at DataStax for over 5 years and have recently open sourced it to support our work with Apache Cassandra, Pulsar, and other open source projects. We describe the architecture of Fallout along with the evolution of its design and the lessons we learned operating this service in a dynamic environment where teams work on different products and favor different benchmarking tools. |
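The abstract describes a provision → configure → run workload → collect metrics → report lifecycle. A minimal sketch of that flow in Python, under stated assumptions: every name here (`ClusterSpec`, `TestReport`, `run_test`) is illustrative only and is not Fallout's actual API or test schema.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ClusterSpec:
    """Hypothetical description of a cluster under test; the fields are
    illustrative, not Fallout's real configuration schema."""
    nodes: int
    config: dict = field(default_factory=dict)


@dataclass
class TestReport:
    """Metrics collected during a run, keyed by metric name."""
    metrics: dict = field(default_factory=dict)


def run_test(cluster: ClusterSpec,
             workload: Callable[[List[str]], object]) -> TestReport:
    """Sketch of the lifecycle the abstract describes. A real testing
    service would create cloud nodes and install software; here each
    step is simulated in-process."""
    # 1. Provision: stand up the requested number of nodes.
    nodes = [f"node-{i}" for i in range(cluster.nodes)]
    # 2. Configure: apply the chosen configuration options to each node.
    configured = {node: dict(cluster.config) for node in nodes}
    # 3. Run the workload against the cluster and collect metrics.
    report = TestReport()
    report.metrics["node_count"] = len(configured)
    report.metrics["workload_result"] = workload(nodes)
    return report


# Usage: a trivial "workload" that just reports how many nodes it saw.
report = run_test(ClusterSpec(nodes=3, config={"heap": "8G"}), workload=len)
print(report.metrics)
```

The point of the sketch is the separation of concerns the paper claims matters: the cluster description, the workload, and the reporting are independent pieces, which is what lets one service support many benchmarking tools.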
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2581622780 |
source | Open Access: Freely Accessible Journals by multiple vendors |
subjects | Computer networks; Configurations; Fallout; Pulsars; Source code; Workload |
title | Fallout: Distributed Systems Testing as a Service |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T19%3A20%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Fallout:%20Distributed%20Systems%20Testing%20as%20a%20Service&rft.jtitle=arXiv.org&rft.au=Guy%20Bolton%20King&rft.date=2021-10-11&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2581622780%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2581622780&rft_id=info:pmid/&rfr_iscdi=true |