Stack implementation method and device based on Cache technology and storage medium
The invention discloses a stack implementation method and device based on Cache technology, and a storage medium. A compiler allocates stack space in DDR for GPU Shader threads, and the low speed of DDR is addressed by adding a Cache dedicated to the stack between the DDR and the MVP micro-architecture pipeline.
Saved in:
Main authors: | RONG YAOCHENG ; JIE XIANG ; LIU YONGGANG ; LI GANG |
---|---|
Format: | Patent |
Language: | chi ; eng |
Subjects: | CALCULATING ; COMPUTING ; COUNTING ; IMAGE DATA PROCESSING OR GENERATION, IN GENERAL ; PHYSICS |
Online access: | Order full text |
creator | RONG YAOCHENG ; JIE XIANG ; LIU YONGGANG ; LI GANG |
description | The invention discloses a stack implementation method, a device, and a storage medium based on Cache technology. In the method, a compiler allocates stack space in DDR for GPU Shader threads, and the low speed of DDR is addressed by adding a Cache dedicated to the stack between the DDR and the MVP micro-architecture pipeline. The method strikes a balance between flexibility and performance, optimizes the MVP micro-architecture design, and is suitable for large-scale popularization and application. In addition, on the basis of the MVP micro-architecture design, the Cache simultaneously caches the stack spaces of four concurrent threads; even when the stack space of GPU threads grows large, the stack Cache can still cache effectively according to the principle of spatial locality, thereby staying within the area and power-consumption limits of a GPU chip. |
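The mechanism the abstract describes (per-thread stack regions in slow DDR, fronted by a small dedicated stack cache that wins because stack accesses are spatially local) can be illustrated with a toy simulator. This is a minimal sketch, not the patent's actual design: the line size, line count, direct-mapped organization, base addresses, and access pattern are all illustrative assumptions.

```python
# Toy direct-mapped stack cache in front of slow DDR (illustrative only).
LINE_BYTES = 64   # assumed cache line size
NUM_LINES = 16    # assumed number of lines in the stack cache

class StackCache:
    def __init__(self):
        self.tags = [None] * NUM_LINES  # tag per line (None = invalid)
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        line_addr = addr // LINE_BYTES
        index = line_addr % NUM_LINES
        if self.tags[index] == line_addr:
            self.hits += 1
        else:
            self.misses += 1            # would trigger a fetch from DDR
            self.tags[index] = line_addr

# Four concurrent threads, each with its own stack region in DDR
# (base addresses are arbitrary illustrative values).
cache = StackCache()
bases = [0x10000, 0x20000, 0x30000, 0x40000]
for base in bases:
    sp = base
    for _ in range(32):                 # 32 sequential 4-byte pushes per thread
        cache.access(sp)
        sp += 4

print(cache.hits, cache.misses)
```

Because consecutive pushes fall into the same cache line, each 64-byte line costs one DDR miss and then serves 15 hits, which is the spatial-locality argument the abstract relies on.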
format | Patent |
fulltext | fulltext_linktorsrc |
language | chi ; eng |
recordid | cdi_epo_espacenet_CN115619619A |
source | esp@cenet |
subjects | CALCULATING ; COMPUTING ; COUNTING ; IMAGE DATA PROCESSING OR GENERATION, IN GENERAL ; PHYSICS |
title | Stack implementation method and device based on Cache technology and storage medium |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T02%3A00%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=RONG%20YAOCHENG&rft.date=2023-01-17&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN115619619A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |