Compiler-based cache line optimization

Bibliographic details

Author: Kosche, Nicolai
Corporate author: Sun Microsystems, Inc
Format: Patent
Patent number: 6564297 (US)
Publication date: 2003-05-13
Language: English
Source: USPTO Issued Patents
Online access: Order full text
Detailed description

Generally, a microprocessor operates much faster than main memory can supply data to it. Therefore, many computer systems temporarily store recently and frequently used data in smaller but much faster cache memory. Cache memory may reside directly on the microprocessor chip (Level 1 cache) or may be external to the microprocessor (Level 2 cache). In the past, on-chip cache memory was relatively small, 8 or 16 kilobytes (KB); more recent microprocessor designs have on-chip caches of 256 and even 512 KB.

Cache line optimization involves computing where cache misses occur in a control flow and assigning probabilities to those misses. Cache lines may be scheduled based on the assigned probabilities and on where the misses occur in the control flow. Cache line probabilities may be calculated from the relationship between a cache line and the locations of cache misses in the control flow. The control flow may be pruned before the probabilities are calculated, for example at function call sites. The address generation of a cache miss may be duplicated so that the address generation and the associated prefetch can be speculatively hoisted.

References may be selected for optimization, cache lines identified, and the selected references mapped to those cache lines. Dependencies within the cache lines may be determined, and the cache lines scheduled based on those dependencies and on their probabilities of usefulness. Instructions may then be scheduled based on the scheduled cache lines and the target machine model so as to maximize the number of outstanding memory transactions. Cache lines may also be scheduled across call sites.
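The abstract describes the transformation only at a high level. The following is a minimal, hand-written C sketch of the effect such a pass might have on a pointer-chasing loop; the `node` structure, the two functions, and the use of GCC/Clang's `__builtin_prefetch` are illustrative assumptions, not part of the patent, and an actual compiler would hoist the address generation and insert the prefetch automatically, guided by the computed miss probabilities.

```c
#include <stddef.h>

struct node {
    struct node *next;   /* loading this pointer is the likely cache miss */
    long         payload;
};

/* Baseline: each iteration waits on the load of n->next before any
 * work on the following node can begin. */
long sum_baseline(struct node *n)
{
    long total = 0;
    while (n != NULL) {
        total += n->payload;
        n = n->next;
    }
    return total;
}

/* Hand-applied sketch of the optimization: the address of the next
 * node is generated early in the iteration and a software prefetch is
 * issued for it, so fetching the next cache line overlaps with the
 * remaining work on the current node. */
long sum_prefetched(struct node *n)
{
    long total = 0;
    while (n != NULL) {
        struct node *next = n->next;        /* hoisted address generation */
        if (next != NULL)
            __builtin_prefetch(next, 0, 3); /* read access, keep in cache */
        total += n->payload;
        n = next;
    }
    return total;
}
```

Whether such a prefetch pays off depends on the probability that the line is actually used on the paths that follow, which is why the abstract ties scheduling to per-cache-line probabilities of usefulness and to the target machine's capacity for outstanding memory transactions.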