Distributed learning information compression method based on shared random projection

The invention discloses a distributed learning information compression method based on shared random projection, belonging to the technical field of machine learning and neural network optimization. In the method, Gaussian random vectors shared among the devices of a distributed system are used to project, compress, and reconstruct gradient information: each gradient vector is compressed into a low-dimensional space of arbitrary size before communication and reconstructed afterwards. The method can be deployed directly in any existing first-order optimizer, communication architecture, or training framework, and guarantees that the reconstructed vector is unbiased with bounded variance, thereby preserving training quality in distributed learning. Compared with existing gradient compression methods, it offers greater generality, faster training, better training results, and simpler deployment.
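
The abstract gives no implementation details, so the following is only a minimal NumPy sketch of the general idea it describes: every worker derives the same Gaussian projection matrix from a shared seed, communicates only the low-dimensional sketch, and reconstructs an unbiased estimate of the gradient. The function names (shared_gaussian_matrix, compress, reconstruct), the per-round shared seed, and the 1/k scaling are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def shared_gaussian_matrix(seed: int, k: int, d: int) -> np.ndarray:
    # Hypothetical setup: all workers agree on the same seed for this round,
    # so they generate identical k x d projection matrices and only the
    # k-dimensional sketch ever needs to be communicated.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((k, d))

def compress(grad: np.ndarray, S: np.ndarray) -> np.ndarray:
    # Project the d-dimensional gradient into k dimensions (k << d).
    return S @ grad

def reconstruct(sketch: np.ndarray, S: np.ndarray) -> np.ndarray:
    # Unbiased reconstruction: with i.i.d. N(0, 1) entries, E[S.T @ S] = k * I,
    # so E[(1/k) * S.T @ S @ g] = g; the variance grows as k shrinks.
    k = S.shape[0]
    return (S.T @ sketch) / k

# Usage sketch: compress a stand-in gradient and recover an unbiased estimate.
d, k = 10_000, 100
seed = 42  # round-specific seed assumed to be synchronized across workers
S = shared_gaussian_matrix(seed, k, d)
g = np.random.default_rng(0).standard_normal(d)   # stand-in for a local gradient
g_hat = reconstruct(compress(g, S), S)            # unbiased, higher-variance estimate of g
```

This matches the abstract's claims in outline only: compression to an arbitrary low dimension before communication, reconstruction after, unbiasedness, and bounded variance; how the patent integrates this with a first-order optimizer is not specified in the record.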

Bibliographic Details
Main authors: FANG CONG, CHO HAN-JIN, XIE XINGYU, LIN ZHONGCHEN
Format: Patent
Language: Chinese; English
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS
Online access: order full text
Record ID: cdi_epo_espacenet_CN118690804A
Source: esp@cenet
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-12T19%3A58%3A29IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=FANG%20CONG&rft.date=2024-09-24&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN118690804A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true