Large Language Models as Zero-shot Dialogue State Tracker through Function Calling
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Large language models (LLMs) are increasingly prevalent in conversational systems due to their advanced understanding and generation capabilities in general contexts. However, their effectiveness in task-oriented dialogues (TOD), which require not only response generation but also effective dialogue state tracking (DST) within specific tasks and domains, remains less satisfactory. In this work, we propose FnCTOD, a novel approach that solves DST with LLMs through function calling. This method improves zero-shot DST, allowing adaptation to diverse domains without extensive data collection or model tuning. Our experimental results demonstrate that our approach achieves exceptional performance with both modestly sized open-source and proprietary LLMs: with in-context prompting, it enables various 7B or 13B parameter models to surpass the previous state-of-the-art (SOTA) achieved by ChatGPT, and it improves ChatGPT's performance, beating the SOTA by 5.6% average joint goal accuracy (JGA). Individual results for GPT-3.5 and GPT-4 are boosted by 4.8% and 14%, respectively. We also show that by fine-tuning on a small collection of diverse task-oriented dialogues, we can equip modestly sized models, specifically a 13B parameter LLaMA2-Chat model, with function-calling capabilities and DST performance comparable to ChatGPT's while maintaining their chat capabilities. The code is publicly available at https://github.com/facebookresearch/FnCTOD
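The core idea, exposing each dialogue domain to the LLM as a function schema and reading the dialogue state off the arguments of the function call the model emits, can be illustrated with a short sketch. Everything below (the `find_hotel` schema, the prompt wording, and the helper functions) is a hypothetical illustration inferred from the abstract, not the paper's actual prompt format; the real implementation lives in the linked repository.

```python
# Minimal sketch of zero-shot DST as function calling, in the spirit of
# FnCTOD. Schema, prompt layout, and parsing are illustrative assumptions;
# see https://github.com/facebookresearch/FnCTOD for the actual code.
import json

# One function per TOD domain: its parameters are the domain's slots, so the
# dialogue state is simply the arguments of the model's function call.
HOTEL_FUNCTION = {
    "name": "find_hotel",
    "description": "Search for a hotel matching the user's constraints.",
    "parameters": {
        "type": "object",
        "properties": {
            "area": {"type": "string",
                     "enum": ["north", "south", "centre", "east", "west"]},
            "pricerange": {"type": "string",
                           "enum": ["cheap", "moderate", "expensive"]},
            "stars": {"type": "string"},
        },
    },
}

def build_prompt(history: list[dict]) -> list[dict]:
    """Prepend the function spec to the conversation as a system message."""
    system = (
        "You are a task-oriented assistant. To track the dialogue state, "
        "call the function below with all slot values mentioned so far.\n"
        + json.dumps(HOTEL_FUNCTION, indent=2)
    )
    return [{"role": "system", "content": system}] + history

def parse_state(model_output: str) -> dict:
    """Decode the model's emitted function call into a flat DST state."""
    call = json.loads(model_output)
    assert call["name"] == HOTEL_FUNCTION["name"]
    return {f"hotel-{slot}": value for slot, value in call["arguments"].items()}

if __name__ == "__main__":
    history = [{"role": "user",
                "content": "I need a cheap hotel in the north, at least 4 stars."}]
    # This prompt would be sent to any function-calling-capable LLM.
    prompt = build_prompt(history)
    # Stand-in for the model's response, so the sketch runs offline.
    fake_output = ('{"name": "find_hotel", "arguments": '
                   '{"area": "north", "pricerange": "cheap", "stars": "4"}}')
    print(parse_state(fake_output))
    # -> {'hotel-area': 'north', 'hotel-pricerange': 'cheap', 'hotel-stars': '4'}
```

Because the state lives entirely in the function-call arguments, adapting to a new domain under this framing only requires writing a new schema, which is what makes the zero-shot setting plausible.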
DOI: 10.48550/arxiv.2402.10466