LinguaLinked: A Distributed Large Language Model Inference System for Mobile Devices

Deploying Large Language Models (LLMs) locally on mobile devices presents a significant challenge due to their extensive memory requirements. In this paper, we introduce LinguaLinked, a system for decentralized, distributed LLM inference on mobile devices. LinguaLinked enables collaborative execution of the inference task across multiple trusted devices. LinguaLinked ensures data privacy by processing information locally. LinguaLinked uses three key strategies. First, an optimized model assignment technique segments LLMs and uses linear optimization to align segments with each device's capabilities. Second, an optimized data transmission mechanism ensures efficient and structured data flow between model segments while also maintaining the integrity of the original model structure. Finally, LinguaLinked incorporates a runtime load balancer that actively monitors and redistributes tasks among mobile devices to prevent bottlenecks, enhancing the system's overall efficiency and responsiveness. We demonstrate that LinguaLinked facilitates efficient LLM inference while maintaining consistent throughput and minimal latency through extensive testing across various mobile devices, from high-end to low-end Android devices. In our evaluations, compared to the baseline, LinguaLinked achieves an inference performance acceleration of $1.11\times$ to $1.61\times$ in single-threaded settings, $1.73\times$ to $2.65\times$ with multi-threading. Additionally, runtime load balancing yields an overall inference acceleration of $1.29\times$ to $1.32\times$.
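The first strategy in the abstract, assigning contiguous model segments to devices via linear optimization, can be illustrated with a small sketch. This is not the paper's implementation; all names and numbers below are hypothetical, and for clarity it uses brute-force enumeration of split points rather than a linear-programming solver, minimizing the bottleneck segment latency subject to per-device memory limits.

```python
# Hypothetical sketch of LinguaLinked-style model assignment (illustrative
# only, not from the paper): partition an LLM's layers into contiguous
# segments, one per device in pipeline order, minimizing the slowest
# (bottleneck) segment while respecting each device's memory capacity.
from itertools import combinations

def assign_segments(layer_mem, layer_cost, device_mem, device_speed):
    """layer_mem / layer_cost: per-layer memory (MB) and compute cost.
    device_mem: memory capacity of each device, in pipeline order.
    device_speed: relative speed of each device (higher = faster).
    Returns (segment_boundaries, bottleneck) or (None, inf) if infeasible."""
    n, k = len(layer_mem), len(device_mem)
    best, best_splits = float("inf"), None
    # Enumerate every way to cut n layers into k contiguous segments.
    for cuts in combinations(range(1, n), k - 1):
        bounds = [0, *cuts, n]
        bottleneck, feasible = 0.0, True
        for d in range(k):
            seg = slice(bounds[d], bounds[d + 1])
            # Segment must fit the device's memory.
            if sum(layer_mem[seg]) > device_mem[d]:
                feasible = False
                break
            # Segment latency: compute cost divided by device speed.
            bottleneck = max(bottleneck, sum(layer_cost[seg]) / device_speed[d])
        if feasible and bottleneck < best:
            best, best_splits = bottleneck, bounds
    return best_splits, best

# Example: six layers across three devices of differing capability;
# the faster middle device absorbs the two most expensive layers.
splits, t = assign_segments(
    layer_mem=[100, 100, 100, 100, 100, 100],
    layer_cost=[1.0, 1.0, 2.0, 2.0, 1.0, 1.0],
    device_mem=[250, 400, 250],
    device_speed=[1.0, 2.0, 1.0],
)
# splits → [0, 2, 4, 6]: layers 0-1, 2-3, 4-5; bottleneck t → 2.0
```

A real system would replace the enumeration with an integer linear program over the same constraints, which is what makes the formulation scale to deeper models and more devices.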

Detailed Description

Bibliographic Details
Main authors: Zhao, Junchen; Song, Yurun; Liu, Simeng; Harris, Ian G; Jyothi, Sangeetha Abdu
Format: Article
Language: English
Subjects:
creator Zhao, Junchen; Song, Yurun; Liu, Simeng; Harris, Ian G; Jyothi, Sangeetha Abdu
description Deploying Large Language Models (LLMs) locally on mobile devices presents a significant challenge due to their extensive memory requirements. In this paper, we introduce LinguaLinked, a system for decentralized, distributed LLM inference on mobile devices. LinguaLinked enables collaborative execution of the inference task across multiple trusted devices. LinguaLinked ensures data privacy by processing information locally. LinguaLinked uses three key strategies. First, an optimized model assignment technique segments LLMs and uses linear optimization to align segments with each device's capabilities. Second, an optimized data transmission mechanism ensures efficient and structured data flow between model segments while also maintaining the integrity of the original model structure. Finally, LinguaLinked incorporates a runtime load balancer that actively monitors and redistributes tasks among mobile devices to prevent bottlenecks, enhancing the system's overall efficiency and responsiveness. We demonstrate that LinguaLinked facilitates efficient LLM inference while maintaining consistent throughput and minimal latency through extensive testing across various mobile devices, from high-end to low-end Android devices. In our evaluations, compared to the baseline, LinguaLinked achieves an inference performance acceleration of $1.11\times$ to $1.61\times$ in single-threaded settings, $1.73\times$ to $2.65\times$ with multi-threading. Additionally, runtime load balancing yields an overall inference acceleration of $1.29\times$ to $1.32\times$.
doi 10.48550/arxiv.2312.00388
format Article
creationdate 2023-12-01
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2312.00388
language eng
source arXiv.org
subjects Computer Science - Distributed, Parallel, and Cluster Computing
Computer Science - Learning
Computer Science - Networking and Internet Architecture
title LinguaLinked: A Distributed Large Language Model Inference System for Mobile Devices