Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation
Saved in:
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Large Language Models (LLMs) demonstrate promising capabilities in solving simple scientific problems but often produce hallucinations for complex ones. While integrating LLMs with tools can increase reliability, this approach typically results in over-reliance on tools, diminishing the model's ability to solve simple problems through basic reasoning. In contrast, human experts first assess problem complexity using domain knowledge before choosing an appropriate solution approach. Inspired by this human problem-solving process, we propose a novel two-component fine-tuning method. In the first component, World Knowledge Distillation (WKD), LLMs learn directly from solutions generated using tools' information, internalizing domain knowledge. In the second component, Tool Usage Adaptation (TUA), we partition problems into easy and hard categories based on the model's direct-answering accuracy. While maintaining the same alignment target for easy problems as in WKD, we train the model to intelligently switch to tool usage for more challenging problems. We validate our method on six scientific benchmark datasets spanning mathematics, climate science, and epidemiology. On average, our models demonstrate a 28.18% improvement in answer accuracy and a 13.89% increase in tool usage precision across all datasets, surpassing state-of-the-art models including GPT-4o and Claude-3.5.
DOI: 10.48550/arxiv.2411.00412
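
The two-component recipe described in the abstract lends itself to a short sketch. The snippet below is a minimal illustration of the TUA-style easy/hard partition, assuming a hypothetical `answer_fn` sampler, exact-match scoring, and an illustrative 0.5 accuracy cutoff; it is not the authors' released implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Problem:
    question: str
    reference_answer: str
    tool_informed_solution: str  # WKD target: solution written with the tool's output
    tool_call_trace: str         # TUA target for hard items: explicit tool invocation

def direct_accuracy(answer_fn: Callable[[str], str],
                    problem: Problem,
                    n_samples: int = 8) -> float:
    """Fraction of sampled no-tool answers that match the reference.
    `answer_fn` stands in for sampling the model directly; exact-match
    scoring and the sample count are simplifying assumptions."""
    hits = sum(answer_fn(problem.question).strip() == problem.reference_answer
               for _ in range(n_samples))
    return hits / n_samples

def build_tua_targets(answer_fn: Callable[[str], str],
                      problems: List[Problem],
                      threshold: float = 0.5) -> List[Tuple[str, str]]:
    """Partition problems by direct-answering accuracy, as TUA describes:
    easy problems keep the WKD alignment target (answer from internalized
    knowledge); hard problems are aligned to a tool-usage trace instead."""
    targets = []
    for p in problems:
        if direct_accuracy(answer_fn, p) >= threshold:
            targets.append((p.question, p.tool_informed_solution))  # easy: answer directly
        else:
            targets.append((p.question, p.tool_call_trace))         # hard: switch to tools
    return targets
```

Under this reading, WKD supplies the tool-informed solutions used as alignment targets throughout, while TUA only changes which target the hard problems are aligned to, which is what lets the model keep answering easy problems without tools.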