Adapting Task-Oriented Dialogue Models for Email Conversations
Main authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Abstract: | Intent detection is a key part of any Natural Language Understanding (NLU) system of a conversational assistant. Detecting the correct intent is essential yet difficult for email conversations, where multiple directives and intents are present. In such settings, conversation context can become a key disambiguating factor for the assistant to detect the user's request. One prominent way of incorporating context is to model past conversation history, as task-oriented dialogue models do. However, the long-form nature of email conversations restricts the direct use of the latest advances in task-oriented dialogue models. In this paper, we therefore provide an effective transfer learning framework (EMToD) that allows the latest developments in dialogue models to be adapted for long-form conversations. We show that the proposed EMToD framework improves intent detection performance over pre-trained language models by 45% and over pre-trained dialogue models by 30% for task-oriented email conversations. Additionally, the modular nature of the proposed framework allows plug-and-play for any future developments in both pre-trained language and task-oriented dialogue models. |
DOI: | 10.48550/arxiv.2208.09439 |
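
The abstract describes a modular design in which a pre-trained language model and a task-oriented dialogue-style context model are combined and can be swapped independently. The sketch below is a hypothetical illustration of how such a plug-and-play pipeline for long-form email threads could be wired; the class name, pooling scheme, layer counts, and intent count are illustrative assumptions, not the EMToD architecture from the paper.

```python
# Hypothetical sketch of a modular long-form intent classifier.
# Module names, dimensions, and pooling are assumptions for illustration,
# not the EMToD implementation described in the paper.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class ModularEmailIntentClassifier(nn.Module):
    def __init__(self, plm_name="bert-base-uncased", num_intents=10,
                 context_layers=2, context_heads=8):
        super().__init__()
        # Swappable pre-trained language model encodes each (long) email turn.
        self.turn_encoder = AutoModel.from_pretrained(plm_name)
        hidden = self.turn_encoder.config.hidden_size
        # Swappable dialogue-style context encoder over the turn embeddings.
        layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=context_heads, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(layer, context_layers)
        self.intent_head = nn.Linear(hidden, num_intents)

    def forward(self, turn_input_ids, turn_attention_mask):
        # turn_input_ids: (num_turns, seq_len) for a single email thread.
        turn_out = self.turn_encoder(
            input_ids=turn_input_ids, attention_mask=turn_attention_mask)
        # Use the [CLS] embedding as a fixed-size summary of each turn.
        turn_vecs = turn_out.last_hidden_state[:, 0, :]         # (num_turns, hidden)
        context = self.context_encoder(turn_vecs.unsqueeze(0))  # (1, num_turns, hidden)
        # Predict the intent of the latest turn, conditioned on the history.
        return self.intent_head(context[:, -1, :])              # (1, num_intents)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    thread = [
        "Hi, can you send me the Q3 report?",
        "Sure. Also, please schedule a review meeting for Friday.",
    ]
    enc = tokenizer(thread, padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    model = ModularEmailIntentClassifier()
    logits = model(enc["input_ids"], enc["attention_mask"])
    print(logits.shape)  # torch.Size([1, 10])
```

Because the turn encoder and the context encoder are separate modules, either component could in principle be replaced by a newer pre-trained language model or dialogue model without changing the rest of the pipeline, which is the plug-and-play property the abstract highlights.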