eFedLLM: Efficient LLM Inference Based on Federated Learning
Large Language Models (LLMs) herald a transformative era in artificial intelligence (AI). However, the expansive scale of data and parameters of LLMs requires high-demand computational and memory resources, restricting their accessibility to a broader range of users and researchers. This paper introduces an effective approach that enhances the operational efficiency and affordability of LLM inference. By utilizing transformer-based federated learning (FL) with model-parallel distributed training, our model efficiently distributes the computational loads and memory requirements across a network of participants. This strategy permits users, especially those with limited resources, to train state-of-the-art LLMs collaboratively. We also innovate an incentive mechanism within the FL framework, rewarding constructive contributions and filtering out malicious activities, thereby safeguarding the integrity and reliability of the training process. Concurrently, we leverage memory hierarchy strategies and Singular Value Decomposition (SVD) on weight matrices to further boost computational and memory efficiency. Our results, derived from formulaic analyses and numerical calculations, demonstrate significant optimization of resource use and democratize access to cutting-edge LLMs, ensuring that a wide range of users can both contribute to and benefit from these advanced models.
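The SVD step mentioned in the abstract can be made concrete. Below is a minimal NumPy sketch of truncated-SVD compression of a single weight matrix; it illustrates the general technique only, not the authors' implementation, and the matrix size and rank are hypothetical choices for the example.

```python
import numpy as np

def svd_compress(W: np.ndarray, rank: int):
    """Factor W ≈ A @ B with truncated SVD.

    Storing A and B costs rank * (m + n) values instead of m * n,
    which is the memory saving that SVD on weight matrices provides.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # shape (m, rank); singular values folded in
    B = Vt[:rank, :]            # shape (rank, n)
    return A, B

# Hypothetical sizes: a 1024 x 1024 projection matrix kept at rank 128.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024), dtype=np.float32)
A, B = svd_compress(W, 128)

print(f"full matrix: {W.nbytes / 1e6:.2f} MB")               # 4.19 MB
print(f"rank-128   : {(A.nbytes + B.nbytes) / 1e6:.2f} MB")  # 1.05 MB
print(f"rel. error : {np.linalg.norm(W - A @ B) / np.linalg.norm(W):.3f}")
```

On a random matrix the rank-128 reconstruction error is necessarily large; the approach presumes that trained LLM weight matrices have fast-decaying singular values, so a modest rank retains most of the signal while the product (x @ A) @ B replaces x @ W at lower compute and memory cost.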
Saved in:
Main authors: | Ding, Shengwen; Hu, Chenhui |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Learning |
Published: | 2024-11-24 |
Online access: | Order full text |
creator | Ding, Shengwen; Hu, Chenhui
---|---|
doi | 10.48550/arxiv.2411.16003
format | Article |
identifier | DOI: 10.48550/arxiv.2411.16003 |
language | eng |
recordid | cdi_arxiv_primary_2411_16003 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Learning
title | eFedLLM: Efficient LLM Inference Based on Federated Learning |