Synchronization scheduler of distributed neural network training
Systems, apparatuses and methods may provide for technology that conducts a first timing measurement of blockage timing during a first window of neural network training. The blockage timing measures the time that processing is impeded at layers of the neural network during the first window due to synchronization of one or more synchronizing parameters of those layers. Based on the first timing measurement, the technology determines whether to modify a synchronization barrier policy to include a synchronization barrier that impedes synchronization of one or more synchronizing parameters of one of the layers during a second window of the training. If the policy is modified to include the barrier, the technology impedes that synchronization during the second window.
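To make the scheduling idea in the abstract concrete, the following is a minimal, self-contained sketch in Python. It is not taken from the patent: the class and function names (SyncScheduler, record_blockage, fake_all_reduce, the threshold_s parameter, and so on) are illustrative assumptions, and the decision rule (place a barrier on any layer whose measured blockage in the previous window exceeds a fixed threshold) is just one plausible way to act on the timing measurement described above.

```python
# Illustrative sketch only; names and thresholds are assumptions,
# not taken from the patent text (US10922610B2).
import random
import time


class SyncScheduler:
    """Tracks per-layer blockage time during one training window and
    decides whether the next window should place a synchronization
    barrier (i.e. defer parameter synchronization) on a given layer."""

    def __init__(self, num_layers, threshold_s=0.03):
        self.num_layers = num_layers
        self.threshold_s = threshold_s          # blockage budget per window
        self.blockage = [0.0] * num_layers      # seconds blocked, per layer
        self.barrier = [False] * num_layers     # current barrier policy

    def record_blockage(self, layer, seconds):
        # Called each iteration with the time the layer spent waiting on
        # all-reduce / parameter synchronization (the "blockage timing").
        self.blockage[layer] += seconds

    def end_window(self):
        # The timing measurement for this window is complete: update the
        # barrier policy for the next window, then reset the counters.
        for layer, blocked in enumerate(self.blockage):
            self.barrier[layer] = blocked > self.threshold_s
        self.blockage = [0.0] * self.num_layers

    def should_sync(self, layer):
        # A barrier on a layer impedes (defers) its synchronization.
        return not self.barrier[layer]


def fake_all_reduce(layer):
    # Stand-in for a real communication call; returns the time it blocked.
    delay = random.uniform(0.0, 0.02)
    time.sleep(delay)
    return delay


if __name__ == "__main__":
    sched = SyncScheduler(num_layers=4, threshold_s=0.03)
    for window in range(2):
        for _iteration in range(5):
            for layer in range(sched.num_layers):
                if sched.should_sync(layer):
                    sched.record_blockage(layer, fake_all_reduce(layer))
        sched.end_window()
        print(f"barrier policy after window {window}: {sched.barrier}")
```

In an actual distributed setup, fake_all_reduce would be replaced by the framework's real gradient all-reduce and the window boundaries would correspond to a fixed number of training iterations; the sketch only shows the measure-then-decide loop the abstract describes.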
Saved in:
Main authors: | Procter, Adam; Karkada, Deepthi; Arunachalam, Meenakshi; Saletore, Vikram |
---|---|
Format: | Patent |
Language: | eng |
Subjects: | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
Online access: | Order full text |
creator | Procter, Adam; Karkada, Deepthi; Arunachalam, Meenakshi; Saletore, Vikram |
description | Systems, apparatuses and methods may provide for technology that conducts a first timing measurement of a blockage timing of a first window of the training of the neural network. The blockage timing measures a time that processing is impeded at layers of the neural network during the first window of the training due to synchronization of one or more synchronizing parameters of the layers. Based upon the first timing measurement, the technology is to determine whether to modify a synchronization barrier policy to include a synchronization barrier to impede synchronization of one or more synchronizing parameters of one of the layers during a second window of the training. The technology is further to impede the synchronization of the one or more synchronizing parameters of the one of the layers during the second window if the synchronization barrier policy is modified to include the synchronization barrier. |
format | Patent |
identifier | US10922610B2 |
language | eng |
recordid | cdi_epo_espacenet_US10922610B2 |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
title | Synchronization scheduler of distributed neural network training |