Focused ReAct: Improving ReAct through Reiterate and Early Stop
Main Authors: , ,
Format: Article
Language: eng
Online Access: Order full text
Summary: Large language models (LLMs) have significantly improved their reasoning and decision-making capabilities, as seen in methods like ReAct. However, despite its effectiveness in tackling complex tasks, ReAct faces two main challenges: losing focus on the original question and becoming stuck in action loops. To address these issues, we introduce Focused ReAct, an enhanced version of the ReAct paradigm that incorporates reiteration and early stop mechanisms. These improvements help the model stay focused on the original query and avoid repetitive behaviors. Experimental results show accuracy gains of 18% to 530% and a runtime reduction of up to 34% compared to the original ReAct method.
DOI: 10.48550/arxiv.2410.10779
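The summary describes the two mechanisms only at a high level. As a rough illustration, the following is a minimal Python sketch of a ReAct-style loop with both ideas; the callables `propose_step`, `run_tool`, and `finalize`, the prompt layout, and the loop-detection rule are assumptions made for illustration, not the paper's actual implementation.

```python
def focused_react(question, propose_step, run_tool, finalize, max_steps=10):
    """Sketch of a ReAct-style loop with reiteration and early stop.

    Assumed stand-in callables (not from the paper's code):
      propose_step(prompt) -> (thought, action)  # one model call
      run_tool(action)     -> observation string # tool execution
      finalize(prompt)     -> final answer string# one model call
    """
    history = []
    seen_actions = set()
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        # Reiteration: restate the original question at the top of every
        # prompt so the model stays anchored to it as the history grows.
        prompt = "\n".join([f"Question: {question}", *history])
        thought, action = propose_step(prompt)
        # Early stop: repeating an action already taken signals a loop,
        # so stop reasoning and answer from what was gathered so far.
        if action in seen_actions:
            return finalize(f"{prompt}\nThought: {thought}\nFinal Answer:")
        seen_actions.add(action)
        observation = run_tool(action)
        history.append(
            f"Thought: {thought}\nAction: {action}\nObservation: {observation}"
        )
    return finalize(f"{prompt}\nFinal Answer:")
```

Detecting a loop by exact repetition of an action string is the simplest possible criterion, and placing the restated question at the top of the prompt is one of several plausible layouts; the paper's own loop detection and reiteration placement may differ.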