GEAR: Augmenting Language Models with Generalizable and Efficient Tool Resolution
Saved in:
Main Author(s):
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Augmenting large language models (LLMs) to use external tools enhances their performance across a variety of tasks. However, prior works over-rely on task-specific demonstrations of tool use, which limits their generalizability and incurs high computational cost from making many calls to large-scale LLMs. We introduce GEAR, a computationally efficient query-tool grounding algorithm that generalizes to various tasks requiring tool use without relying on task-specific demonstrations. GEAR achieves better efficiency by delegating tool grounding and execution to small language models (SLMs) and LLMs, respectively, while leveraging semantic and pattern-based evaluation at both the question and answer levels for generalizable tool grounding. We evaluate GEAR on 14 datasets across 6 downstream tasks, demonstrating its strong generalizability to novel tasks, tools, and different SLMs. Despite offering greater efficiency, GEAR achieves higher precision in tool grounding than prior strategies based on LLM prompting, thus improving downstream accuracy at reduced computational cost. For example, we demonstrate that GEAR-augmented GPT-J and GPT-3 outperform counterpart tool-augmented baselines because of better tool use.
DOI: 10.48550/arxiv.2307.08775
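The abstract outlines GEAR's division of labor: a small language model scores each candidate tool at the question level (semantic fit between the query and the tool description) and at the answer level (pattern-based fit of a preliminary answer produced with the tool), and only the selected tool's final execution is handed to the large model. The sketch below is a minimal Python illustration of that control flow, not the paper's implementation; every name in it (Tool, ground_and_execute, slm_similarity, slm_pattern_score, llm_call, alpha) is hypothetical, and the weighted combination of the two scores is an assumption rather than GEAR's exact formulation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tool:
    name: str
    description: str           # natural-language description used for grounding
    run: Callable[[str], str]  # executes the tool on a query

def ground_and_execute(
    query: str,
    tools: List[Tool],
    slm_similarity: Callable[[str, str], float],     # question-level semantic score (SLM)
    slm_pattern_score: Callable[[str, str], float],  # answer-level pattern score (SLM)
    llm_call: Callable[[str], str],                  # expensive LLM, called once at the end
    alpha: float = 0.5,                              # assumed mixing weight, not from the paper
) -> str:
    """Pick the best tool with cheap SLM-side scoring, then execute with the LLM."""
    best_tool, best_score = tools[0], float("-inf")
    for tool in tools:
        # Question level: semantic similarity between the query and the tool description.
        q_score = slm_similarity(query, tool.description)
        # Answer level: pattern-based fit of a preliminary answer obtained with the tool.
        preliminary = tool.run(query)
        a_score = slm_pattern_score(query, preliminary)
        score = alpha * q_score + (1 - alpha) * a_score
        if score > best_score:
            best_tool, best_score = tool, score
    # Only this final step touches the large model, keeping total LLM calls low.
    prompt = (
        f"Answer the query using the '{best_tool.name}' tool.\n"
        f"Query: {query}\n"
        f"Tool output: {best_tool.run(query)}"
    )
    return llm_call(prompt)
```

Keeping the per-tool scoring loop on the SLM side means grounding cost scales with the number of tools at small-model prices, while the large model is invoked only for the single selected tool, which is the efficiency argument the abstract makes.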