Distributed AI training topology based on flexible cable connection
A data processing system includes a central processing unit (CPU) and accelerator cards coupled to the CPU over a bus, each of the accelerator cards having a plurality of data processing (DP) accelerators to receive DP tasks from the CPU and to perform the received DP tasks. At least two of the accelerator cards are coupled to each other via an inter-card connection, and at least two of the DP accelerators are coupled to each other via an inter-chip connection. Each of the inter-card connection and the inter-chip connection is capable of being dynamically activated or deactivated, such that in response to a request received from the CPU, any one of the accelerator cards or any one of the DP accelerators within any one of the accelerator cards can be enabled or disabled to process any one of the DP tasks received from the CPU.
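The abstract describes a reconfigurable training topology: accelerator cards attached to a host CPU, several DP accelerators per card, and inter-card and inter-chip connections that can be switched on or off at runtime so that any card or chip can be enabled or disabled for a given task. The sketch below is a minimal Python model of that idea under stated assumptions, not the patent's implementation; all names (`Topology`, `AcceleratorCard`, `DPAccelerator`, `set_card_link`, `dispatch`) are hypothetical.

```python
# Hypothetical sketch of the topology described in the abstract: cards hold DP
# accelerators, inter-card and inter-chip links can be toggled, and the CPU
# dispatches a task to any accelerator that is currently enabled.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class DPAccelerator:
    chip_id: int
    enabled: bool = True


@dataclass
class AcceleratorCard:
    card_id: int
    accelerators: List[DPAccelerator] = field(default_factory=list)
    enabled: bool = True


class Topology:
    """Tracks cards, chips, and dynamically switchable connections."""

    def __init__(self, cards: List[AcceleratorCard]):
        self.cards = {c.card_id: c for c in cards}
        # (card_a, card_b) -> active?
        self.inter_card_links: Dict[Tuple[int, int], bool] = {}
        # ((card, chip), (card, chip)) -> active?
        self.inter_chip_links: Dict[Tuple[Tuple[int, int], Tuple[int, int]], bool] = {}

    def set_card_link(self, card_a: int, card_b: int, active: bool) -> None:
        # Activate or deactivate an inter-card connection.
        self.inter_card_links[tuple(sorted((card_a, card_b)))] = active

    def set_chip_link(self, a: Tuple[int, int], b: Tuple[int, int], active: bool) -> None:
        # Activate or deactivate an inter-chip connection.
        self.inter_chip_links[tuple(sorted((a, b)))] = active

    def dispatch(self, task: str) -> Tuple[int, int]:
        """Route a DP task from the CPU to the first enabled accelerator."""
        for card in self.cards.values():
            if not card.enabled:
                continue
            for acc in card.accelerators:
                if acc.enabled:
                    print(f"card {card.card_id} / chip {acc.chip_id} runs {task!r}")
                    return card.card_id, acc.chip_id
        raise RuntimeError("no enabled DP accelerator available")


if __name__ == "__main__":
    cards = [AcceleratorCard(i, [DPAccelerator(j) for j in range(4)]) for i in range(2)]
    topo = Topology(cards)
    topo.set_card_link(0, 1, active=True)          # enable an inter-card connection
    topo.cards[0].accelerators[0].enabled = False  # disable one DP accelerator
    topo.dispatch("all-reduce step")
```

Keying each link by the sorted pair of endpoints makes both directions of a cable map to the same switchable entry, mirroring the abstract's notion of a single connection that is activated or deactivated as a whole.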
Saved in:
Main authors: | Ouyang, Jian; Zhu, Hefei; Chen, Qingshu; Gong, Xiaozhang; Zhao, Zhibiao |
---|---|
Format: | Patent |
Language: | eng |
Subjects: | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; PHYSICS |
Online access: | Order full text |
creator | Ouyang, Jian; Zhu, Hefei; Chen, Qingshu; Gong, Xiaozhang; Zhao, Zhibiao |
description | A data processing system includes a central processing unit (CPU) and accelerator cards coupled to the CPU over a bus, each of the accelerator cards having a plurality of data processing (DP) accelerators to receive DP tasks from the CPU and to perform the received DP tasks. At least two of the accelerator cards are coupled to each other via an inter-card connection, and at least two of the DP accelerators are coupled to each other via an inter-chip connection. Each of the inter-card connection and the inter-chip connection is capable of being dynamically activated or deactivated, such that in response to a request received from the CPU, any one of the accelerator cards or any one of the DP accelerators within any one of the accelerator cards can be enabled or disabled to process any one of the DP tasks received from the CPU. |
format | Patent |
fullrecord | Patent US11615295B2, published 2023-03-28 (creators, title, subjects, and abstract as listed in this record); full record via esp@cenet: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20230328&DB=EPODOC&CC=US&NR=11615295B2 |
fulltext | fulltext_linktorsrc |
identifier | US11615295B2 |
language | eng |
recordid | cdi_epo_espacenet_US11615295B2 |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; PHYSICS |
title | Distributed AI training topology based on flexible cable connection |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T10%3A27%3A48IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Ouyang,%20Jian&rft.date=2023-03-28&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS11615295B2%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |