eFedLLM: Efficient LLM Inference Based on Federated Learning
Saved in:
Main authors: ,
Format: Article
Language: eng
Keywords:
Online access: Order full text
Abstract: Large Language Models (LLMs) herald a transformative era in artificial
intelligence (AI). However, the expansive scale of data and parameters of LLMs
demands substantial computational and memory resources, restricting their
accessibility to a broader range of users and researchers. This paper
introduces an effective approach that enhances the operational efficiency and
affordability of LLM inference. By utilizing transformer-based federated
learning (FL) with model-parallel distributed training, our model efficiently
distributes the computational loads and memory requirements across a network of
participants. This strategy permits users, especially those with limited
resources, to train state-of-the-art LLMs collaboratively. We also introduce an
incentive mechanism within the FL framework that rewards constructive
contributions and filters out malicious activities, thereby safeguarding the
integrity and reliability of the training process. Concurrently, we leverage
memory hierarchy strategies and Singular Value Decomposition (SVD) on weight
matrices to further boost computational and memory efficiency. Our results,
derived from formulaic analyses and numerical calculations, demonstrate
significant optimization of resource use and democratized access to
cutting-edge LLMs, ensuring that a wide range of users can both contribute to
and benefit from these advanced models.
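
The abstract mentions applying SVD to weight matrices but does not detail the procedure. The sketch below is only a rough illustration of the underlying idea, not the paper's method: using NumPy, a hypothetical 1024x1024 weight matrix, and an arbitrary rank of 64, one large matrix is replaced by two thin factors so a participant stores and multiplies roughly rank*(d_out + d_in) parameters instead of d_out*d_in. How aggressively the rank can be reduced in practice depends on how quickly the singular values of the real weight matrices decay.

```python
# Minimal sketch (not from the paper): truncated SVD turns one large weight
# matrix into two thin factors, cutting the parameters a federated participant
# must store and multiply. Matrix size and rank are hypothetical.
import numpy as np

def low_rank_factorize(W: np.ndarray, rank: int):
    """Return (A, B) with A @ B approximating W using only `rank` singular values."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (d_out, rank): left singular vectors scaled by singular values
    B = Vt[:rank, :]             # (rank, d_in): top right singular vectors
    return A, B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 1024x1024 weight matrix with a decaying spectrum,
    # so a low-rank approximation is actually informative.
    base = rng.standard_normal((1024, 64)) @ rng.standard_normal((64, 1024))
    W = (base + 0.01 * rng.standard_normal((1024, 1024))).astype(np.float32)
    A, B = low_rank_factorize(W, rank=64)
    print(f"params: {W.size} -> {A.size + B.size}")  # 1048576 -> 131072
    print(f"rel. error: {np.linalg.norm(W - A @ B) / np.linalg.norm(W):.4f}")
```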
DOI: 10.48550/arxiv.2411.16003