Distributed multi-task classification: a decentralized online learning approach


Bibliographic details
Published in: Machine Learning, 2018-04, Vol. 107 (4), p. 727-747
Main authors: Zhang, Chi; Zhao, Peilin; Hao, Shuji; Soh, Yeng Chai; Lee, Bu Sung; Miao, Chunyan; Hoi, Steven C. H.
Format: Article
Language: English
Online access: Full text
Description: Although dispersing a single task across distributed learning nodes has been intensively studied in previous research, multi-task learning on distributed networks remains underexplored, especially in decentralized settings. The challenge lies in the fact that different tasks may have different optimal learning weights, while communication through the distributed network forces all tasks to converge to a unique classifier. In this paper, we present a novel algorithm that overcomes this challenge and enables learning multiple tasks simultaneously on a decentralized distributed network. Specifically, the learning framework can be separated into two phases: (i) in the first phase, multi-task information is shared within each node; (ii) communication between nodes then leads the whole network to converge to a common minimizer. Theoretical analysis indicates that our algorithm achieves an O(√T) regret bound when compared with the best classifier in hindsight, which is further validated by experiments on both synthetic and real-world datasets.
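To make the two-phase framework in the abstract concrete, here is a minimal illustrative sketch of decentralized online multi-task learning. This is not the paper's algorithm: the hinge-loss updates, the ring topology, the mean-based task coupling, and all names (`round_update`, `eta`, `lam`, etc.) are assumptions chosen only to show the general pattern of "share within a node, then gossip between nodes".

```python
import numpy as np

# Illustrative sketch only (not the authors' method): a generic decentralized
# online multi-task learner following the abstract's two-phase pattern.
rng = np.random.default_rng(0)
n_nodes, n_tasks, dim = 4, 3, 5
eta, lam = 0.1, 0.5  # assumed step size and within-node task-coupling strength

# neighbors[i] = nodes that node i averages with (assumed ring topology)
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

# W[i, k] is node i's weight vector for task k
W = np.zeros((n_nodes, n_tasks, dim))

def hinge_grad(w, x, y):
    """Subgradient of the hinge loss max(0, 1 - y * <w, x>)."""
    return -y * x if y * (w @ x) < 1.0 else np.zeros_like(x)

def round_update(W, samples):
    """One online round: gradient step, intra-node sharing, inter-node gossip."""
    W = W.copy()
    # online gradient step: each node sees one (x, y) example per task
    for i in range(n_nodes):
        for k in range(n_tasks):
            x, y = samples[i][k]
            W[i, k] -= eta * hinge_grad(W[i, k], x, y)
    # phase (i): share multi-task information within each node
    # (shrink each task's weights toward the node's mean weight vector)
    node_mean = W.mean(axis=1, keepdims=True)
    W = (1 - lam) * W + lam * node_mean
    # phase (ii): gossip averaging with neighbors drives the network
    # toward a common minimizer
    W_new = np.empty_like(W)
    for i in range(n_nodes):
        group = [i] + neighbors[i]
        W_new[i] = W[group].mean(axis=0)
    return W_new

# toy run: all tasks share one linear concept, labels from its sign
w_true = rng.normal(size=dim)

def make_sample():
    x = rng.normal(size=dim)
    return x, float(np.sign(w_true @ x))

for _ in range(200):
    samples = [[make_sample() for _ in range(n_tasks)]
               for _ in range(n_nodes)]
    W = round_update(W, samples)

# after many rounds, gossip keeps the nodes' weights close to one another
spread = np.max(np.abs(W - W.mean(axis=0)))
print(f"max deviation across nodes: {spread:.4f}")
```

The gossip step is the key design point the abstract highlights: repeated local averaging contracts disagreement between nodes even though each node only talks to its neighbors, which is what allows a regret guarantee against a single best classifier in hindsight.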
DOI: 10.1007/s10994-017-5676-y
ISSN: 0885-6125
EISSN: 1573-0565
Source: SpringerLink Journals
Subjects:
Artificial Intelligence
Classifiers
Computer networks
Computer Science
Control
Convergence
Distance learning
Machine learning
Mechatronics
Natural Language Processing (NLP)
Robotics
Simulation and Modeling
Stress concentration