REDUCING TRAINING TIMES OF DEEP NEURAL NETWORKS THROUGH EFFICIENT HYBRID PARALLELISM

Presented are systems and methods to automatically find efficient parallelization strategies for deep neural networks (DNNs). A computation graph comprising an efficiently ordered sequence of vertices aids in computing the best parallelizing strategy in a relatively short time. Effectiveness of the parallelization strategies is evaluated on various DNNs, and the performance of the strategies proposed by various embodiments is compared against data parallelism, expert-designed strategies, and other state-of-the-art approaches. Experimental results demonstrate that the proposed strategies outperform a baseline data parallelism strategy and achieve better performance than expert-designed strategies and state-of-the-art approaches.
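The abstract describes choosing a parallelization strategy over an efficiently ordered sequence of computation-graph vertices. The record does not disclose the patented algorithm itself, but the general flavor of such a search can be sketched as a dynamic program over an ordered layer chain; everything below (function names, the data/model strategy set, and all costs) is hypothetical and for illustration only:

```python
# Hypothetical illustration only: dynamic programming over an ordered
# sequence of computation-graph vertices, picking one parallelization
# strategy per vertex so that compute plus re-sharding cost is minimized.

def best_strategy(layers, strategies, step_cost, switch_cost):
    """Return (total_cost, per-layer strategy list) for a chain of layers.

    layers      -- vertex names in topological order
    strategies  -- candidate parallelization choices (e.g. data/model)
    step_cost   -- step_cost(layer, s): cost of running layer with strategy s
    switch_cost -- switch_cost(a, b): cost of re-sharding between a and b
    """
    # best[s] = (cheapest cost of any prefix ending in strategy s, choices so far)
    best = {s: (step_cost(layers[0], s), [s]) for s in strategies}
    for layer in layers[1:]:
        best = {
            s: min(
                (cost + switch_cost(prev, s) + step_cost(layer, s), path + [s])
                for prev, (cost, path) in best.items()
            )
            for s in strategies
        }
    return min(best.values())

# Made-up costs: the fully connected layer is cheaper under model parallelism.
costs = {("conv1", "data"): 1.0, ("conv1", "model"): 3.0,
         ("conv2", "data"): 1.0, ("conv2", "model"): 3.0,
         ("fc",    "data"): 5.0, ("fc",    "model"): 2.0}
total, plan = best_strategy(
    ["conv1", "conv2", "fc"], ["data", "model"],
    step_cost=lambda l, s: costs[(l, s)],
    switch_cost=lambda a, b: 0.0 if a == b else 1.5)
# plan == ["data", "data", "model"], total == 5.5
```

With these toy numbers, switching to model parallelism only for the large fully connected layer beats both pure data parallelism (7.0) and pure model parallelism, which matches the hybrid-parallelism intuition the abstract appeals to.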

Detailed Description

Saved in:
Bibliographic Details
Main Author: ELANGO, Venmugil
Format: Patent
Language: English
Subjects:
Online Access: Order full text
creator ELANGO, Venmugil
description Presented are systems and methods to automatically find efficient parallelization strategies for deep neural networks (DNNs). A computation graph comprising an efficiently ordered sequence of vertices aids in computing the best parallelizing strategy in a relatively short time. Effectiveness of the parallelization strategies is evaluated on various DNNs, and the performance of the strategies proposed by various embodiments is compared against data parallelism, expert-designed strategies, and other state-of-the-art approaches. Experimental results demonstrate that the proposed strategies outperform a baseline data parallelism strategy and achieve better performance than expert-designed strategies and state-of-the-art approaches.
format Patent
creationdate 2021-05-06
language eng
recordid cdi_epo_espacenet_US2021133591A1
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
PHYSICS
title REDUCING TRAINING TIMES OF DEEP NEURAL NETWORKS THROUGH EFFICIENT HYBRID PARALLELISM
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-17T18%3A13%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=ELANGO,%20Venmugil&rft.date=2021-05-06&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2021133591A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true