Locality vs. criticality

Current memory hierarchies exploit locality of reference to reduce load latency and thereby improve processor performance. Locality-based schemes aim at reducing the number of cache misses but tend to ignore the nature of those misses. This leads to a potential mismatch between load latency requirements...

Detailed description

Bibliographic details
Main authors: Srinivasan, S.T., Dz-Ching Ju, R., Lebeck, A.R., Wilkerson, C.
Format: Conference proceeding
Language: eng
Subjects:
container_end_page 143
container_issue
container_start_page 132
container_title
container_volume
creator Srinivasan, S.T.
Dz-Ching Ju, R.
Lebeck, A.R.
Wilkerson, C.
description Current memory hierarchies exploit locality of reference to reduce load latency and thereby improve processor performance. Locality-based schemes aim at reducing the number of cache misses but tend to ignore the nature of those misses. This leads to a potential mismatch between load latency requirements and the latencies realized by a traditional memory system. To bridge this gap, we partition loads into critical and non-critical. A load that must complete early to prevent processor stalls is classified as critical, while a load that can tolerate a long latency is considered non-critical. In this paper, we investigate whether it is worth violating locality and exploiting information on criticality to improve processor performance. We present a dynamic critical load classification scheme and show that 40% performance improvements are possible on average if all critical loads are guaranteed to hit in the L1 cache. We then compare the two properties, locality and criticality, in the context of several cache organization and prefetching schemes. We find that the working set of critical loads is large, and hence practical cache organization schemes based on criticality are unable to reduce the critical load miss ratios enough to produce performance gains. Although criticality-based prefetching can help for some resource-constrained programs, its benefit over locality-based prefetching is small and may not be worth the added complexity.
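
The critical/non-critical distinction in the abstract can be illustrated with a minimal sketch in Python. This is an illustration only, assuming a trace-driven model in which each load records how many independent instructions the core issued while the load was outstanding; the stall-based heuristic, the STALL_WINDOW threshold, and all names (LoadRecord, classify_load) are assumptions for this sketch, not the classification criteria used in the paper.

# Illustrative sketch only (not the paper's scheme): tag a load "critical" when
# the core ran out of independent work, and therefore stalled, before the
# load's data returned. STALL_WINDOW and all names here are assumptions.

from dataclasses import dataclass

STALL_WINDOW = 8  # hypothetical: independent instructions needed to hide a load


@dataclass
class LoadRecord:
    pc: int                  # program counter of the load
    issue_cycle: int         # cycle the load issued
    data_ready_cycle: int    # cycle its data returned from the memory hierarchy
    indep_insts_issued: int  # instructions issued while the load was outstanding


def classify_load(load: LoadRecord) -> str:
    """Return 'critical' if the pipeline likely stalled waiting on this load,
    'non-critical' if it tolerated the latency."""
    latency = load.data_ready_cycle - load.issue_cycle
    # Little independent work issued in the load's shadow means the core
    # stalled waiting for the data: treat the load as latency-critical.
    if latency > 1 and load.indep_insts_issued < STALL_WINDOW:
        return "critical"
    return "non-critical"


# Example: a long-latency load with little independent work behind it is
# critical; a short or well-overlapped load is not.
miss = LoadRecord(pc=0x400A10, issue_cycle=100, data_ready_cycle=180, indep_insts_issued=3)
hit = LoadRecord(pc=0x400A18, issue_cycle=105, data_ready_cycle=107, indep_insts_issued=12)
assert classify_load(miss) == "critical"
assert classify_load(hit) == "non-critical"

In this toy model a load that leaves the core with little independent work is tagged critical, while a long-latency load overlapped by plenty of other work stays non-critical, mirroring the abstract's distinction between loads that must complete early and loads that can tolerate latency.
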
doi_str_mv 10.1109/ISCA.2001.937442
format Conference Proceeding
identifier ISSN: 1063-6897; EISSN: 2575-713X; ISBN: 0769511627; ISBN: 9780769511627
ispartof Proceedings 28th Annual International Symposium on Computer Architecture, 2001, p.132-143
issn 1063-6897
2575-713X
language eng
recordid cdi_ieee_primary_937442
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Bridges
Cache memory
Computer science
Delay
Hardware
Memory management
Microprocessors
Performance gain
Prefetching
Random access memory
title Locality vs. criticality