An Opportunistically Parallel Lambda Calculus for Performant Composition of Large Language Models
Saved in:
Main authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Large language models (LLMs) have shown impressive results at a wide range of
tasks. However, they have limitations, such as hallucinating facts and
struggling with arithmetic. Recent work has addressed these issues with
sophisticated decoding techniques. However, performant decoding, particularly
for sophisticated techniques, relies crucially on parallelization and batching,
which are difficult for developers.
We make two observations: 1) existing approaches are high-level
domain-specific languages for gluing expensive black-box calls, but are not
general or compositional; 2) LLM programs are essentially pure (all effects
commute). Guided by these observations, we develop a novel, general-purpose
lambda calculus for automatically parallelizing a wide range of LLM
interactions, without user intervention. The key difference versus standard
lambda calculus is a novel "opportunistic" evaluation strategy, which steps
independent parts of a program in parallel, dispatching black-box external
calls as eagerly as possible, even while data-independent parts of the program
are waiting for their own external calls to return. To maintain the simplicity
of the language and to ensure uniformity of opportunistic evaluation,
control-flow and looping constructs are implemented in-language, via Church
encodings.
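The two ideas above can be illustrated with a minimal Python sketch. This is not EPIC's actual API or evaluator; it uses `concurrent.futures` to mimic eager dispatch of data-independent calls, and `mock_llm` is a hypothetical stand-in for a real black-box model call.

```python
# Sketch (not EPIC) of: (1) "opportunistic" evaluation, which dispatches
# data-independent external calls eagerly and in parallel, joining only at
# a true data dependency; and (2) Church-encoded control flow, so that
# conditionals are ordinary lambda terms the evaluator can step uniformly.
from concurrent.futures import ThreadPoolExecutor

def mock_llm(prompt):
    # Hypothetical stand-in for an expensive black-box LLM call.
    return f"answer({prompt})"

# --- (1) Opportunistic dispatch: two independent calls run in parallel.
with ThreadPoolExecutor() as pool:
    a = pool.submit(mock_llm, "plan step 1")   # dispatched immediately
    b = pool.submit(mock_llm, "plan step 2")   # does not wait for `a`
    combined = (a.result(), b.result())        # join only where data is needed

# --- (2) Church encodings: booleans and `if` as plain lambdas.
TRUE = lambda t: lambda f: t
FALSE = lambda t: lambda f: f
# `church_if` selects between *thunks*, so the untaken branch is never
# evaluated and the language needs no built-in conditional construct.
church_if = lambda c: lambda then_: lambda else_: c(then_)(else_)()

result = church_if(TRUE)(lambda: combined[0])(lambda: mock_llm("fallback"))
```

In a real opportunistic evaluator the join points would be inferred from the program's data dependencies rather than written by hand, which is what lets straightforward sequential-looking programs dispatch their external calls in parallel.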
We implement this approach in a framework called EPIC, embedded in, and
interoperating closely with, Python. We demonstrate its versatility and
performance with three case studies drawn from the machine learning literature:
Tree-of-Thoughts (LLMs embedded in classic search procedures), nested tool use,
and constrained decoding. Our experiments show that opportunistic evaluation
offers a $1.5\times$ to $4.8\times$ speedup over sequential evaluation, while
still allowing practitioners to write straightforward and composable programs,
without any manual parallelism or batching. |
DOI: | 10.48550/arxiv.2405.11361 |