Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: In the real world, large language models (LLMs) can serve as assistants that help users accomplish their jobs and can also support the development of advanced applications. For the wide application of LLMs, inference efficiency is an essential concern; it has been widely studied in existing work, and numerous optimization algorithms and code libraries have been proposed to improve it. Nonetheless, users still find it challenging to compare the effectiveness of all these methods and to understand the underlying mechanisms. In this work, we perform a detailed coarse-to-fine analysis of the inference performance of various code libraries. To evaluate overall effectiveness, we examine four usage scenarios within two practical applications. We further provide both theoretical and empirical fine-grained analyses of each module in the Transformer architecture. Our experiments yield comprehensive results that are invaluable for researchers seeking to evaluate code libraries and improve inference strategies.
DOI: 10.48550/arxiv.2404.11502
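
As a purely illustrative sketch (not taken from the paper and not its benchmarking code), the snippet below shows one coarse way to time LLM inference, separating prefill latency from decode throughput, which is the kind of measurement such an evaluation concerns. It assumes the Hugging Face transformers and PyTorch libraries; the model name "gpt2" and the prompt are placeholders.

```python
# Illustrative sketch only: coarse prefill/decode timing for a causal LM.
# Assumes transformers + torch are installed; "gpt2" is a placeholder model.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the model under evaluation
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Explain the attention mechanism in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Prefill: a single forward pass over the full prompt.
    t0 = time.perf_counter()
    model(**inputs)
    prefill_seconds = time.perf_counter() - t0

    # Decode: autoregressive generation of new tokens.
    max_new_tokens = 64
    t0 = time.perf_counter()
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    decode_seconds = time.perf_counter() - t0

generated = output.shape[1] - inputs["input_ids"].shape[1]
print(f"Prefill latency: {prefill_seconds:.3f} s")
print(f"Decode throughput: {generated / decode_seconds:.1f} tokens/s")
```

In practice, a finer-grained study would repeat such measurements per module and per usage scenario rather than end to end, but this sketch conveys the basic latency/throughput distinction.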