SYSTEM FOR HIERARCHIZING CACHE MEMORY
PURPOSE: To limit the growth in hardware while keeping pace with higher CPU speeds, by providing one cache memory for storing operands and another cache memory for storing n consecutive instructions.

CONSTITUTION: A first cache memory 11 stores operands, while a second cache memory 12 can store n consecutive instructions (n being a positive integer >= 2). When a preceding instruction is prefetched in parallel with instruction execution by the instruction execution control part 20, memory 12 is accessed using the instruction address held in the preceding-instruction address register 15; when an access comes through the operand address generating circuit 18, memory 11 is accessed instead. In other words, operand accesses use the small memory 11, while instruction accesses read consecutive instructions from the large-capacity memory 12 and execute them in sequence, which limits the growth in hardware and makes it possible to keep pace with higher CPU speeds.
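The split described above (a small operand cache plus a larger instruction cache whose lines hold n consecutive instructions, so that sequential fetches after one line fill keep hitting) can be illustrated with a toy software model. This is a minimal sketch, not the patented circuit: all names (`DirectMappedCache`, `fetch_instruction`, `load_operand`), the cache sizes, and the choice n = 4 are hypothetical, and details such as the prefetch control via the preceding-instruction address register 15 are omitted.

```python
# Toy model (hypothetical names throughout) of the two-cache arrangement in the
# abstract: a small operand cache (memory 11) and a larger instruction cache
# (memory 12) whose lines hold n consecutive instructions.

class DirectMappedCache:
    """Toy direct-mapped cache; each line holds `words_per_line` consecutive words."""

    def __init__(self, num_lines, words_per_line, backing):
        self.num_lines = num_lines
        self.words_per_line = words_per_line
        self.backing = backing                  # list standing in for main memory
        self.tags = [None] * num_lines          # tag per line, None = invalid
        self.lines = [None] * num_lines         # cached words per line

    def read(self, addr):
        """Return (word, hit) for the word at `addr`, filling the whole line on a miss."""
        line_addr = addr // self.words_per_line  # which group of consecutive words
        index = line_addr % self.num_lines
        if self.tags[index] != line_addr:        # miss: fetch the full line
            base = line_addr * self.words_per_line
            self.lines[index] = self.backing[base:base + self.words_per_line]
            self.tags[index] = line_addr
            hit = False
        else:
            hit = True
        return self.lines[index][addr % self.words_per_line], hit


# Backing store standing in for main memory (arbitrary toy contents).
memory = [f"word{i}" for i in range(256)]

# Memory 11: small, one word per line; memory 12: larger, n = 4 consecutive instructions per line.
operand_cache = DirectMappedCache(num_lines=8, words_per_line=1, backing=memory)
instruction_cache = DirectMappedCache(num_lines=32, words_per_line=4, backing=memory)


def fetch_instruction(pc):
    """Instruction accesses go to the instruction cache (memory 12)."""
    return instruction_cache.read(pc)


def load_operand(addr):
    """Operand accesses go to the operand cache (memory 11)."""
    return operand_cache.read(addr)


if __name__ == "__main__":
    # Sequential instruction fetches: one miss fills a 4-word line and the next
    # three consecutive fetches hit, which is the effect the abstract relies on.
    for pc in range(8):
        word, hit = fetch_instruction(pc)
        print(f"fetch pc={pc}: {word} ({'hit' if hit else 'miss'})")
    # Operand accesses use the separate, smaller cache.
    print(load_operand(100))
```

Keeping the two caches separate is what lets the operand side stay small while only the instruction side pays for long lines; the sketch reflects that split but says nothing about the actual line sizes or replacement policy of the patent.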
Field | Value |
---|---|
creator | OMORI YUZO |
format | Patent |
date | 1989-02-27 |
linktorsrc | https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=19890227&DB=EPODOC&CC=JP&NR=S6450126A |
fulltext | fulltext_linktorsrc |
language | eng |
recordid | cdi_epo_espacenet_JPS6450126A |
source | esp@cenet |
subjects | CALCULATING; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; PHYSICS |
title | SYSTEM FOR HIERARCHIZING CACHE MEMORY |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T16%3A28%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=OMORI%20YUZO&rft.date=1989-02-27&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EJPS6450126A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |