Symbolic Working Memory Enhances Language Models for Complex Rule Application
Format: Article
Language: English
Abstract: Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning involving a series of rule application steps, especially when rules are presented non-sequentially. Our preliminary analysis shows that while LLMs excel in single-step rule application, their performance drops significantly in multi-step scenarios due to the challenge of rule grounding: anchoring the applicable rule and its supporting facts at each step, amidst multiple input rules, facts, and inferred facts. To address this, we propose augmenting LLMs with an external working memory and introduce a neurosymbolic framework for rule application. The memory stores facts and rules in both natural-language and symbolic forms, enabling precise tracking. Utilizing this memory, our framework iteratively performs symbolic rule grounding and LLM-based rule implementation; the former matches the predicates and variables of symbolic rules against stored facts to ground the applicable rule at each step. Experiments indicate our framework's effectiveness in rule application and its robustness across varying numbers of steps and settings. Code and data are available at https://github.com/SiyuanWangw/RuleApplication.
DOI: 10.48550/arxiv.2408.13654
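
The symbolic rule-grounding step described in the abstract is easiest to see in miniature. The following is a minimal Python sketch of the idea, not the authors' released code: each working-memory entry pairs a natural-language form with a symbolic form, and grounding unifies a rule's premise predicates and variables against stored facts to find consistent variable bindings. All names here (Fact, Rule, ground_rule) are illustrative assumptions.

from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Fact:
    text: str        # natural-language form, e.g. "Alice is the parent of Bob."
    predicate: str   # symbolic predicate, e.g. "parent"
    args: tuple      # constant arguments, e.g. ("alice", "bob")

@dataclass(frozen=True)
class Rule:
    text: str         # natural-language form of the rule
    premises: tuple   # ((predicate, (variables, ...)), ...)
    conclusion: tuple # (predicate, (variables, ...))

def ground_rule(rule, facts):
    """Return every variable binding under which all premises of `rule`
    match stored facts (the symbolic rule-grounding step)."""
    # Candidate facts per premise: those sharing the premise's predicate and arity.
    candidates = [
        [f for f in facts if f.predicate == pred and len(f.args) == len(vs)]
        for pred, vs in rule.premises
    ]
    groundings = []
    for combo in product(*candidates):
        binding, consistent = {}, True
        for (pred, vs), fact in zip(rule.premises, combo):
            for var, const in zip(vs, fact.args):
                # Fail if the same variable would bind to two different constants.
                if binding.setdefault(var, const) != const:
                    consistent = False
                    break
            if not consistent:
                break
        if consistent:
            groundings.append(binding)
    return groundings

# Toy example: parent(X, Y) & parent(Y, Z) -> grandparent(X, Z)
memory = [
    Fact("Alice is the parent of Bob.", "parent", ("alice", "bob")),
    Fact("Bob is the parent of Carol.", "parent", ("bob", "carol")),
]
rule = Rule(
    "If X is the parent of Y and Y is the parent of Z, then X is the grandparent of Z.",
    premises=(("parent", ("X", "Y")), ("parent", ("Y", "Z"))),
    conclusion=("grandparent", ("X", "Z")),
)
print(ground_rule(rule, memory))  # [{'X': 'alice', 'Y': 'bob', 'Z': 'carol'}]

In the full framework as the abstract describes it, an LLM would then verbalize the grounded conclusion (LLM-based rule implementation), and the resulting new fact, in both forms, would be written back to the working memory for the next iteration.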