FEDERATED LEARNING METHOD, APPARATUS AND SYSTEM, ELECTRONIC DEVICE, AND STORAGE MEDIUM

The present disclosure provides a federated learning method, apparatus and system, an electronic device, and a storage medium, which relate to the field of artificial intelligence technology, in particular to the fields of computer vision and deep learning technologies. A specific implementation solution includes: performing a plurality of rounds of training until a training end condition is met, so as to obtain a trained global model; and publishing the trained global model to a plurality of devices. Each round of training in the plurality of rounds of training includes: transmitting a current global model to at least some devices in the plurality of devices; receiving trained parameters for the current global model from the at least some devices; performing an aggregation on the received parameters to obtain a current aggregation model; and adjusting the current aggregation model based on a globally shared dataset, and updating the adjusted aggregation model as a new current global model for a next round of training.
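The claimed training loop can be read as a standard server-orchestrated federated round with an additional server-side adjustment step on a globally shared dataset. Below is a minimal Python sketch of that reading; the `local_train` and `fine_tune` callables, the random device sampling, and the unweighted parameter averaging are illustrative assumptions, not details taken from the patent.

```python
import random
import numpy as np

def aggregate(parameters):
    # The disclosure only says "performing an aggregation on the received
    # parameters"; a plain unweighted average is used here as a placeholder.
    return np.mean(parameters, axis=0)

def federated_training(global_model, devices, shared_dataset,
                       local_train, fine_tune, num_rounds=10, fraction=0.5):
    """One possible reading of the claimed training loop.

    local_train(model, device) -> trained parameters   (hypothetical helper)
    fine_tune(params, dataset) -> adjusted model        (hypothetical helper)
    """
    for _ in range(num_rounds):  # stands in for "until a training end condition is met"
        # Transmit the current global model to at least some of the devices.
        k = max(1, int(fraction * len(devices)))
        selected = random.sample(devices, k)
        # Receive trained parameters for the current global model.
        received = [local_train(global_model, device) for device in selected]
        # Aggregate the received parameters into a current aggregation model.
        aggregation_model = aggregate(received)
        # Adjust the aggregation model on the globally shared dataset and use
        # the result as the new current global model for the next round.
        global_model = fine_tune(aggregation_model, shared_dataset)
    # Publish the trained global model to the plurality of devices.
    return global_model
```

With NumPy weight vectors, `local_train` might run a few epochs of local optimization on a device's private data and `fine_tune` a few steps on the shared dataset; the fixed round budget simply stands in for the unspecified training end condition.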

Bibliographic Details
Main Authors: LIU, Ji, ZHANG, Hong, DOU, Dejing, JIA, Juncheng, PENG, Shengbo, ZHOU, Jiwen, ZHOU, Ruipu
Format: Patent
Language: English ; French ; German
Subjects: CALCULATING ; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS ; COMPUTING ; COUNTING ; PHYSICS
Online Access: Order full text
creator LIU, Ji
ZHANG, Hong
DOU, Dejing
JIA, Juncheng
PENG, Shengbo
ZHOU, Jiwen
ZHOU, Ruipu
description The present disclosure provides a federated learning method, apparatus and system, an electronic device, and a storage medium, which relate to a field of an artificial intelligence technology, in particular to fields of computer vision and deep learning technologies. A specific implementation solution includes: performing a plurality of rounds of training until a training end condition is met, so as to obtain a trained global model; and publishing the trained global model to a plurality of devices. Each round of training in the plurality of rounds of training includes: transmitting a current global model to at least some devices in the plurality of devices; receiving trained parameters for the current global model from the at least some devices; performing an aggregation on the received parameters to obtain a current aggregation model; and adjusting the current aggregation model based on a globally shared dataset, and updating the adjusted aggregation model as a new current global model for a next round of training.
format Patent
language eng ; fre ; ger
recordid cdi_epo_espacenet_EP4113394A3
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
PHYSICS
title FEDERATED LEARNING METHOD, APPARATUS AND SYSTEM, ELECTRONIC DEVICE, AND STORAGE MEDIUM