Coded Distributed Image Classification
creator | Tang, Jiepeng; Agrawal, Navneet; Stanczak, Slawomir; Zhu, Jingge |
description | In this paper, we present a coded computation (CC) scheme for distributed
computation of the inference phase of machine learning (ML) tasks, specifically
image classification. Building upon Agrawal et al. (2022), the proposed scheme
combines the strengths of deep learning and the Lagrange interpolation technique
to mitigate the effect of straggling workers, and recovers approximate results
with reasonable accuracy using the outputs of any $R$ out of $N$ workers, where
$R \leq N$. The proposed scheme guarantees a minimum recovery threshold $R$ for
non-polynomial problems, which can be adjusted as a tunable system parameter.
Moreover, unlike existing schemes, our scheme remains flexible with respect to
worker availability and system design. We propose two system designs for our CC
scheme that allow the computational load to be distributed flexibly between the
master and the workers, depending on the accessibility of the input data. Our
experimental results demonstrate the superiority of our scheme over
state-of-the-art CC schemes for image classification tasks, and pave the way for
the design of new schemes for the distributed computation of general ML
classification tasks. |
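The paper's scheme targets non-polynomial ML inference, but the underlying mechanism it builds on is classical Lagrange coded computing: encode the inputs as evaluations of an interpolating polynomial, let each worker apply the target function to one coded share, and recover the results by interpolation from any $R$ worker outputs. A minimal sketch for a polynomial toy function (all parameters here are illustrative, not taken from the paper):

```python
from fractions import Fraction

def lagrange_eval(xs, ys, z):
    # Evaluate at z the unique degree-(len(xs)-1) polynomial through (xs, ys).
    total = Fraction(0)
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = Fraction(yj)
        for m, xm in enumerate(xs):
            if m != j:
                term *= (z - xm) / (xj - xm)
        total += term
    return total

# Data: K = 2 inputs; target function f(x) = x^2 (degree 2).
X = [Fraction(3), Fraction(5)]
alphas = [Fraction(0), Fraction(1)]   # interpolation points for the data
f = lambda x: x * x

# Encoding: the data polynomial u satisfies u(alpha_i) = X_i; each of the
# N workers receives one evaluation of u at its own point beta_j.
N = 5
betas = [Fraction(b) for b in range(2, 2 + N)]
shares = [lagrange_eval(alphas, X, b) for b in betas]

# Each worker applies f to its coded share. f(u(z)) has degree
# (K - 1) * deg(f) = 2, so any R = 3 outputs determine it.
results = [f(s) for s in shares]

# Decoding: interpolate f(u(z)) from an arbitrary subset of R workers
# (the remaining N - R stragglers are simply ignored), then read off
# f(X_i) = f(u(alpha_i)).
R = 3
subset = [0, 2, 4]
zs = [betas[i] for i in subset]
ys = [results[i] for i in subset]
recovered = [lagrange_eval(zs, ys, a) for a in alphas]
# recovered == [f(3), f(5)] == [9, 25]
```

For a non-polynomial function such as a neural-network classifier, $f(u(z))$ is no longer a polynomial, which is why the paper trades exact recovery for approximate results at a tunable threshold $R$.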
doi_str_mv | 10.48550/arxiv.2307.04915 |
format | Article |
creationdate | 2023-07-10 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2307.04915 |
language | eng |
recordid | cdi_arxiv_primary_2307_04915 |
source | arXiv.org |
subjects | Computer Science - Distributed, Parallel, and Cluster Computing |
title | Coded Distributed Image Classification |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-07T03%3A17%3A06IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Coded%20Distributed%20Image%20Classification&rft.au=Tang,%20Jiepeng&rft.date=2023-07-10&rft_id=info:doi/10.48550/arxiv.2307.04915&rft_dat=%3Carxiv_GOX%3E2307_04915%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |