Data For: Application Performance Monitoring: Trade-Off Between Overhead Reduction And Maintainability

Monitoring of a software system provides insights into its runtime behavior, improving system analysis and comprehension. System-level monitoring approaches focus, e.g., on network monitoring, providing information on externally visible system behavior. Application-level performance monitoring frameworks, such as Kieker or Dapper, allow observing the internal application behavior, but introduce runtime overhead that depends on the number of instrumentation probes. We report on how we were able to significantly reduce the runtime overhead of the Kieker monitoring framework. To achieve this optimization, we employed micro-benchmarks with a structured performance engineering approach. During optimization, we kept track of the impact on the maintainability of the framework. In this paper, we discuss the trade-off between performance and maintainability that emerged in this context. To the best of our knowledge, publications on monitoring frameworks provide no or only weak performance evaluations, making comparisons cumbersome. However, our micro-benchmark, presented in this paper, provides a basis for such comparisons. Our experiment code and data are available as open source software so that interested researchers may repeat or extend our experiments for comparison on other hardware platforms or with other monitoring frameworks. This dataset supplements the paper and contains the raw experimental data as well as several generated diagrams for each experiment.
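The methodology described above rests on measuring how much latency an instrumentation probe adds to a monitored method call. The following Java sketch illustrates that idea in miniature; it is not the MooBench harness contained in this dataset, and the probe, the workload duration, and the iteration counts are illustrative assumptions.

import java.util.Arrays;

// Minimal sketch of a monitoring-overhead micro-benchmark in the spirit of
// MooBench. It is NOT the actual MooBench harness: the "probe" below is a
// hypothetical stand-in for a Kieker instrumentation probe, and all sizes
// are illustrative assumptions.
public final class OverheadSketch {

    // Simulated application work: busy-wait for roughly workNanos nanoseconds.
    private static void monitoredMethod(long workNanos) {
        long start = System.nanoTime();
        while (System.nanoTime() - start < workNanos) {
            // spin to emulate work of a fixed duration
        }
    }

    // Hypothetical stand-in for a probe collecting and handing off one record.
    private static volatile long recordSink;
    private static void probe() {
        recordSink = System.nanoTime();
    }

    // Measure per-call latency, optionally with the simulated probe enabled.
    private static long[] measure(int calls, long workNanos, boolean withProbe) {
        long[] latencies = new long[calls];
        for (int i = 0; i < calls; i++) {
            long before = System.nanoTime();
            if (withProbe) {
                probe(); // "instrumented" variant
            }
            monitoredMethod(workNanos);
            latencies[i] = System.nanoTime() - before;
        }
        return latencies;
    }

    private static long median(long[] values) {
        long[] copy = values.clone();
        Arrays.sort(copy);
        return copy[copy.length / 2];
    }

    public static void main(String[] args) {
        final int warmup = 100_000; // assumed warm-up size, not MooBench's default
        final int calls = 100_000;  // assumed measurement size
        final long work = 500;      // ~500 ns of simulated application work

        measure(warmup, work, false); // warm up the JIT before measuring
        measure(warmup, work, true);

        long base = median(measure(calls, work, false));
        long instr = median(measure(calls, work, true));
        System.out.println("median baseline      : " + base + " ns/call");
        System.out.println("median instrumented  : " + instr + " ns/call");
        System.out.println("median probe overhead: " + (instr - base) + " ns/call");
    }
}

The benchmark in the dataset covers several monitoring configurations and records full response-time distributions per experiment; the sketch only contrasts an uninstrumented run with a single simulated probe.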


Bibliographic Details
Main Authors: Waller, Jan; Fittkau, Florian; Hasselbring, Wilhelm
Format: Dataset
Language: English
Subjects: Benchmarking; Kieker; MooBench; Software Performance Engineering
creator Waller, Jan; Fittkau, Florian; Hasselbring, Wilhelm
description Monitoring of a software system provides insights into its runtime behavior, improving system analysis and comprehension. System-level monitoring approaches focus, e.g., on network monitoring, providing information on externally visible system behavior. Application-level performance monitoring frameworks, such as Kieker or Dapper, allow observing the internal application behavior, but introduce runtime overhead that depends on the number of instrumentation probes. We report on how we were able to significantly reduce the runtime overhead of the Kieker monitoring framework. To achieve this optimization, we employed micro-benchmarks with a structured performance engineering approach. During optimization, we kept track of the impact on the maintainability of the framework. In this paper, we discuss the trade-off between performance and maintainability that emerged in this context. To the best of our knowledge, publications on monitoring frameworks provide no or only weak performance evaluations, making comparisons cumbersome. However, our micro-benchmark, presented in this paper, provides a basis for such comparisons. Our experiment code and data are available as open source software so that interested researchers may repeat or extend our experiments for comparison on other hardware platforms or with other monitoring frameworks. This dataset supplements the paper and contains the raw experimental data as well as several generated diagrams for each experiment.
doi_str_mv 10.5281/zenodo.11428
format Dataset
fulltext fulltext_linktorsrc
identifier DOI: 10.5281/zenodo.11428
language eng
recordid cdi_datacite_primary_10_5281_zenodo_11428
source DataCite
subjects Benchmarking; Kieker; MooBench; Software Performance Engineering
title Data For: Application Performance Monitoring: Trade-Off Between Overhead Reduction And Maintainability
url https://doi.org/10.5281/zenodo.11428
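The DOI recorded above is the persistent route to the dataset. As a small illustration (the choice of Java's standard HttpClient here is an assumption of mine, not part of this record), the sketch below resolves the DOI and prints the Zenodo landing page it redirects to.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Resolves the dataset's DOI and reports where it lands. Requires Java 11+.
public final class ResolveDataset {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NORMAL) // follow doi.org -> Zenodo
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://doi.org/10.5281/zenodo.11428"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Landing page: " + response.uri()); // final URI after redirects
        System.out.println("HTTP status : " + response.statusCode());
    }
}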