Automatic Engineering of Long Prompts
Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in solving complex open-domain tasks, guided by comprehensive instructions and demonstrations provided in the form of prompts. However, these prompts can be lengthy, often comprising hundreds of lines and thousands of tokens, and their design often requires considerable human effort. Recent research has explored automatic prompt engineering for short prompts, typically consisting of one or a few sentences. However, the automatic design of long prompts remains a challenging problem due to its immense search space. In this paper, we investigate the performance of greedy algorithms and genetic algorithms for automatic long prompt engineering. We demonstrate that a simple greedy approach with beam search outperforms other methods in terms of search efficiency. Moreover, we introduce two novel techniques that utilize search history to enhance the effectiveness of LLM-based mutation in our search algorithm. Our results show that the proposed automatic long prompt engineering algorithm achieves an average accuracy gain of 9.2% on eight tasks in Big Bench Hard, highlighting the significance of automating prompt design to fully harness the capabilities of LLMs.
DOI: 10.48550/arxiv.2311.10117
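The abstract describes a greedy search with beam search over sentence-level edits, where an LLM proposes rewrites (mutations) of individual sentences in a long prompt. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: `mutate_sentence` and `evaluate_prompt` are assumed stand-ins for an LLM-backed rephrasing call and a held-out-set scorer, and the paper's history-guided mutation techniques are only hinted at in a comment.

```python
# Minimal, hypothetical sketch of greedy beam search over long-prompt edits.
# `mutate_sentence` and `evaluate_prompt` are assumed stand-ins: in practice
# the mutation would ask an LLM to rephrase one sentence of the prompt, and
# the evaluation would score the full prompt on a held-out task set.
import random
from typing import Callable, List, Tuple


def beam_search_prompt(
    initial_prompt: List[str],                       # prompt split into sentences
    mutate_sentence: Callable[[str], str],           # LLM-backed rephrasing (assumed)
    evaluate_prompt: Callable[[List[str]], float],   # dev-set accuracy (assumed)
    beam_width: int = 4,
    iterations: int = 20,
) -> Tuple[List[str], float]:
    """Greedily rewrite one sentence at a time, keeping the top-`beam_width` prompts."""
    beam = [(evaluate_prompt(initial_prompt), initial_prompt)]
    for _ in range(iterations):
        candidates = list(beam)
        for _score, prompt in beam:
            # Pick one sentence to rewrite; a real system could bias this
            # choice using the history of past successful edits.
            idx = random.randrange(len(prompt))
            new_prompt = prompt.copy()
            new_prompt[idx] = mutate_sentence(prompt[idx])
            candidates.append((evaluate_prompt(new_prompt), new_prompt))
        # Keep only the best `beam_width` prompts found so far.
        candidates.sort(key=lambda pair: pair[0], reverse=True)
        beam = candidates[:beam_width]
    best_score, best_prompt = max(beam, key=lambda pair: pair[0])
    return best_prompt, best_score
```

In practice, `beam_width` and the number of iterations trade search cost against accuracy gains; the history-based techniques mentioned in the abstract would additionally bias which sentence is rewritten and how it is mutated.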