Large Language Models are Good Multi-lingual Learners: When LLMs Meet Cross-lingual Prompts
Format: Article
Language: English
Abstract: With the advent of Large Language Models (LLMs), generating rule-based data for real-world applications has become more accessible. Due to the inherent ambiguity of natural language and the complexity of rule sets, especially in long contexts, LLMs often struggle to follow all specified rules, frequently omitting at least one. To enhance the reasoning and understanding of LLMs on long and complex contexts, we propose a novel prompting strategy, Multi-Lingual Prompt (MLPrompt), which automatically translates the error-prone rule that an LLM struggles to follow into another language, thus drawing greater attention to it. Experimental results on public datasets across various tasks show that MLPrompt outperforms state-of-the-art prompting methods such as Chain of Thought, Tree of Thought, and Self-Consistency. Additionally, we introduce a framework integrating MLPrompt with an auto-checking mechanism for structured data generation, with a specific case study on text-to-MIP instances. Further, we extend the proposed framework to text-to-SQL to demonstrate its ability to synthesize structured data.
DOI: 10.48550/arxiv.2409.11056
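
To make the abstract's core idea concrete, here is a minimal Python sketch of the MLPrompt strategy, not the authors' implementation: the `translate` helper, the target language, and the rule list are illustrative assumptions, and a real setup would call an actual machine-translation model or service for the error-prone rule.

```python
# Minimal sketch of the MLPrompt idea: rewrite the one rule an LLM keeps
# missing into another language so it stands out in a long, complex prompt.
# The `translate` stub and the example rules are assumptions for illustration.

def translate(text: str, target_lang: str) -> str:
    """Stand-in for a machine-translation call (hypothetical, not a real API)."""
    canned = {
        "Return valid JSON only.": {"de": "Geben Sie nur gültiges JSON zurück."},
    }
    return canned.get(text, {}).get(target_lang, text)


def build_ml_prompt(task: str, rules: list[str], error_prone: int,
                    target_lang: str = "de") -> str:
    """Assemble a prompt where only the error-prone rule is translated."""
    rendered = [
        translate(rule, target_lang) if i == error_prone else rule
        for i, rule in enumerate(rules)
    ]
    return task + "\n" + "\n".join(f"- {r}" for r in rendered)


if __name__ == "__main__":
    rules = [
        "Use every column mentioned in the question.",
        "Return valid JSON only.",  # the rule the model frequently omits
        "Do not invent table names.",
    ]
    print(build_ml_prompt("Generate a SQL query for the request below.",
                          rules, error_prone=1))
```

Under this reading, the mixed-language prompt is then passed to the LLM as usual; the translated rule breaks the uniformity of the context and, per the abstract, draws greater attention to the constraint the model would otherwise drop.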