Measuring Experimental Error in Microprocessor Simulation

We measure the experimental error that arises from the use of non-validated simulators in computer architecture research, with the goal of increasing the rigor of simulation-based studies. We describe the methodology that we used to validate a microprocessor simulator against a Compaq DS-10L workstation, which contains an Alpha 21264 processor. Our evaluation suite consists of a set of 21 microbenchmarks that stress different aspects of the 21264 microarchitecture. Using the microbenchmark suite as the set of workloads, we describe how we reduced our simulator error to an arithmetic mean of 2%, and include details about the specific aspects of the pipeline that required extra care to reduce the error. We show how these low-level optimizations reduce average error from 40% to less than 20% on macrobenchmarks drawn from the SPEC2000 suite. Finally, we examine the degree to which performance optimizations are stable across different simulators, showing that researchers would draw different conclusions, in some cases, if using validated simulators.
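The error metric described in the abstract (per-benchmark error against hardware, summarized as an arithmetic mean) can be sketched as follows. The benchmark names and cycle counts below are hypothetical placeholders for illustration, not measurements from the paper:

```python
# Sketch of the validation metric from the abstract: unsigned percent error
# of simulated vs. hardware-measured cycle counts per microbenchmark,
# summarized by the arithmetic mean. All names and numbers are hypothetical.

measured = {"dep_chain": 1.00e6, "branch_mispredict": 2.40e6, "dcache_miss": 3.10e6}
simulated = {"dep_chain": 1.02e6, "branch_mispredict": 2.31e6, "dcache_miss": 3.20e6}

def percent_error(sim: float, real: float) -> float:
    """Unsigned percent error of the simulated value relative to hardware."""
    return abs(sim - real) / real * 100.0

errors = {b: percent_error(simulated[b], measured[b]) for b in measured}
mean_error = sum(errors.values()) / len(errors)

for b, e in errors.items():
    print(f"{b}: {e:.2f}% error")
print(f"arithmetic mean error: {mean_error:.2f}%")
```

The arithmetic mean weights every microbenchmark equally, which is why per-pipeline-aspect microbenchmarks (rather than whole SPEC runs) make the metric easy to attribute to specific simulator inaccuracies.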

Detailed Description

Saved in:
Bibliographic details
Main authors: Desikan, Rajagopalan, Burger, Doug, Keckler, Stephen W.
Format: Conference Proceeding
Language: eng
Subjects:
Online access: Full text
container_end_page 277
container_start_page 266
creator Desikan, Rajagopalan
Burger, Doug
Keckler, Stephen W.
description We measure the experimental error that arises from the use of non-validated simulators in computer architecture research, with the goal of increasing the rigor of simulation-based studies. We describe the methodology that we used to validate a microprocessor simulator against a Compaq DS-10L workstation, which contains an Alpha 21264 processor. Our evaluation suite consists of a set of 21 microbenchmarks that stress different aspects of the 21264 microarchitecture. Using the microbenchmark suite as the set of workloads, we describe how we reduced our simulator error to an arithmetic mean of 2%, and include details about the specific aspects of the pipeline that required extra care to reduce the error. We show how these low-level optimizations reduce average error from 40% to less than 20% on macrobenchmarks drawn from the SPEC2000 suite. Finally, we examine the degree to which performance optimizations are stable across different simulators, showing that researchers would draw different conclusions, in some cases, if using validated simulators.
doi_str_mv 10.1145/379240.565338
format Conference Proceeding
publisher New York, NY, USA: ACM
fulltext fulltext
identifier ISBN: 0769511627; ISBN: 9780769511627
ispartof Proceedings of the 28th annual international symposium on Computer architecture, 2001, p.266-277
language eng
recordid cdi_acm_books_10_1145_379240_565338
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Applied computing -- Computers in other domains -- Personal computers and PC applications -- Microcomputers
Computing methodologies -- Modeling and simulation -- Model development and analysis -- Modeling methodologies
General and reference -- Cross-computing tools and techniques -- Measurement
General and reference -- Cross-computing tools and techniques -- Metrics
title Measuring Experimental Error in Microprocessor Simulation
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T11%3A54%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-acm&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Measuring%20Experimental%20Error%20in%20Microprocessor%20Simulation&rft.btitle=Proceedings%20of%20the%2028th%20annual%20international%20symposium%20on%20Computer%20architecture&rft.au=Desikan,%20Rajagopalan&rft.date=2001-01-01&rft.spage=266&rft.epage=277&rft.pages=266-277&rft.isbn=0769511627&rft.isbn_list=9780769511627&rft_id=info:doi/10.1145/379240.565338&rft_dat=%3Cacm%3Eacm_books_10_1145_379240_565338%3C/acm%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true