Federated Prompting and Chain-of-Thought Reasoning for Improving LLMs Answering
Saved in:
Main authors: , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: We investigate how to enhance answer precision in frequently asked questions posed by distributed users using cloud-based Large Language Models (LLMs). Our study focuses on a typical situation where users ask similar queries that involve identical mathematical reasoning steps and problem-solving procedures. Due to the unsatisfactory accuracy of LLMs' zero-shot prompting with standalone questions, we propose to improve the distributed synonymous questions using Self-Consistency (SC) and Chain-of-Thought (CoT) techniques. Specifically, we first retrieve synonymous questions from a crowd-sourced database and create a federated question pool. We call these federated synonymous questions with the same or different parameters SP-questions or DP-questions, respectively. We refer to our methods as Fed-SP-SC and Fed-DP-CoT, which can generate significantly more accurate answers for all user queries without requiring sophisticated model-tuning. Through extensive experiments, we demonstrate that our proposed methods can significantly enhance answer accuracy by fully exploring the synonymous nature of the questions and the consistency of the answers.
DOI: 10.48550/arxiv.2304.13911
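
The Fed-SP-SC idea described in the summary, pooling synonymous same-parameter (SP) questions and returning the most self-consistent answer to every user, can be illustrated with a minimal sketch. This is not the authors' implementation: the example question pool, the `fed_sp_sc` helper, and the stand-in `query_llm` callable are illustrative assumptions.

```python
from collections import Counter
from typing import Callable, List


def fed_sp_sc(sp_questions: List[str],
              query_llm: Callable[[str], str]) -> str:
    """Return the most self-consistent answer across synonymous SP-questions."""
    # One zero-shot call per paraphrase in the federated question pool.
    answers = [query_llm(q) for q in sp_questions]
    # Majority vote: keep the answer the LLM produces most consistently.
    most_common_answer, _ = Counter(answers).most_common(1)[0]
    return most_common_answer


if __name__ == "__main__":
    # Hypothetical paraphrases of one same-parameter math question.
    pool = [
        "Tom has 3 apples and buys 5 more. How many apples does he have?",
        "If Tom starts with 3 apples and gets 5 more, how many in total?",
        "Tom owns 3 apples; after buying 5 more, how many does he own now?",
    ]
    # Stand-in for a cloud LLM call; replace with a real API in practice.
    fake_llm = lambda q: "8"
    print(fed_sp_sc(pool, fake_llm))  # -> "8"
```

Only the majority vote over zero-shot answers is sketched here; how synonymous questions are retrieved from the crowd-sourced database and how CoT demonstrations are built for different-parameter (DP) questions in Fed-DP-CoT are omitted.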