ENERGY-EFFICIENT DEEP NEURAL NETWORK TRAINING ON DISTRIBUTED SPLIT ATTRIBUTES
A method of operating a master node in a vertical federated learning, vFL, system including a plurality of workers for training a split neural network includes receiving layer outputs for a sample period from one or more of the workers for a cut-layer at which the neural network is split between the workers and the master node, and determining whether layer outputs for the cut-layer were not received from one of the workers. If so, the method generates imputed values of the missing layer outputs, calculates gradients for neurons in the cut-layer based on the received and imputed layer outputs, splits the gradients into groups associated with respective workers, and transmits each group to its worker.
Saved in:
Main authors: | ICKIN, Selim; VANDIKAS, Konstantinos |
---|---|
Format: | Patent |
Language: | eng |
Subjects: | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
Online access: | Order full text |
creator | ICKIN, Selim; VANDIKAS, Konstantinos |
description | A method of operating a master node in a vertical federated learning, vFL, system including a plurality of workers for training a split neural network includes receiving layer outputs for a sample period from one or more of the workers for a cut-layer at which the neural network is split between the workers and the master node, and determining whether layer outputs for the cut-layer were not received from one of the workers. In response to determining that layer outputs for the cut-layer were not received from one of the workers, the method includes generating imputed values of the layer outputs that were not received, calculating gradients for neurons in the cut-layer based on the received layer outputs and the imputed layer outputs, splitting the gradients into groups associated with respective ones of the workers, and transmitting the groups of gradients to respective ones of the workers. |
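The master-node loop the abstract describes (receive cut-layer outputs, impute any that are missing, backpropagate to the cut layer, split the gradients per worker) can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the function names (`impute_layer_outputs`, `master_step`), the mean-over-history imputation scheme, and the linear-plus-sigmoid head at the master are all assumptions made for the example.

```python
import numpy as np

CUT_NEURONS_PER_WORKER = 4  # neurons each worker contributes to the cut layer
NUM_WORKERS = 3

def impute_layer_outputs(past_outputs):
    """Impute a missing worker's cut-layer outputs.

    One plausible scheme (an assumption, not the patent's): the mean of
    that worker's previously received outputs.
    """
    return np.mean(past_outputs, axis=0)

def master_step(received, history, labels, w_head):
    """One training step at the master node.

    received: dict worker_id -> cut-layer activations, or absent if not received
    history:  dict worker_id -> list of past activations, used for imputation
    labels:   targets for this sample period
    w_head:   weights of the master's head model (here a single linear layer)
    """
    # 1. Assemble the full cut-layer vector, imputing missing workers.
    acts = []
    for wid in range(NUM_WORKERS):
        a = received.get(wid)
        if a is None:
            a = impute_layer_outputs(history[wid])
        acts.append(a)
    cut = np.concatenate(acts)

    # 2. Forward through the master's head (linear + sigmoid) and
    #    backpropagate a binary cross-entropy loss to the cut layer.
    z = w_head @ cut
    p = 1.0 / (1.0 + np.exp(-z))
    grad_cut = w_head.T @ (p - labels)   # dLoss/d(cut-layer outputs)

    # 3. Split the cut-layer gradients into per-worker groups for transmission.
    return {
        wid: grad_cut[wid * CUT_NEURONS_PER_WORKER:(wid + 1) * CUT_NEURONS_PER_WORKER]
        for wid in range(NUM_WORKERS)
    }
```

In a real deployment each group of gradients would be sent back over the network to its worker, which continues backpropagation through its local partition of the model; here the function simply returns the per-worker groups.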
format | Patent |
fullrecord | Patent US2024119305A1, published 2024-04-11. Creators: ICKIN, Selim; VANDIKAS, Konstantinos. Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS. Open access (free_for_read); full text via esp@cenet: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20240411&DB=EPODOC&CC=US&NR=2024119305A1 |
fulltext | fulltext_linktorsrc |
language | eng |
recordid | cdi_epo_espacenet_US2024119305A1 |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
title | ENERGY-EFFICIENT DEEP NEURAL NETWORK TRAINING ON DISTRIBUTED SPLIT ATTRIBUTES |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-21T21%3A35%3A32IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=ICKIN,%20Selim&rft.date=2024-04-11&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2024119305A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |