Self-Aware Feedback-Based Self-Learning in Large-Scale Conversational AI
Saved in:

Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Self-learning paradigms in large-scale conversational AI agents tend to
leverage user feedback to bridge between what users say and what they mean.
However, such learning, particularly in Markov-based query rewriting systems,
has largely failed to address the impact of these models on future training, where
successive feedback is inevitably contingent on the rewrite itself, especially
in a continually updating environment. In this paper, we explore how this
inherent lack of self-awareness impairs model performance, ultimately resulting
in both Type I and Type II errors over time. To that end, we propose augmenting
the Markov graph construction with a superposition-based adjacency matrix.
Here, our method leverages induced stochasticity to reactively learn a
locally adaptive decision boundary based on the performance of individual
rewrites in a bi-variate beta setting. We also present a data augmentation
strategy that leverages template-based generation to abridge complex
conversation hierarchies of dialogs and thereby simplify the learning process.
All in all, we demonstrate that our self-aware model improves the overall PR-AUC
by 27.45%, achieves a relative defect reduction of up to 31.22%, and adapts
more quickly to changes in global preferences across a large number of
customers. |
DOI: | 10.48550/arxiv.2205.00029 |
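The abstract's "induced stochasticity ... in a bi-variate beta setting" suggests a Thompson-sampling-style mechanism: track per-rewrite feedback as a beta posterior and sample from it when deciding whether to apply a rewrite, so the decision boundary adapts as feedback accumulates. The sketch below is only an illustration of that general idea under assumptions of our own; the paper's actual bi-variate formulation and superposition-based adjacency matrix are not reproduced here, and all names (`RewriteArm`, `choose_rewrite`, the threshold) are hypothetical.

```python
import random

class RewriteArm:
    """Per-rewrite feedback tracked as a Beta(alpha, beta) posterior.

    Illustrative sketch only; the update rule and prior are assumptions,
    not the paper's method.
    """
    def __init__(self):
        self.alpha = 1.0  # prior pseudo-count of positive feedback
        self.beta = 1.0   # prior pseudo-count of negative feedback

    def update(self, positive_feedback: bool):
        # Binary user feedback on the issued rewrite updates the posterior.
        if positive_feedback:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def sample(self) -> float:
        # Induced stochasticity: sample a plausible success rate rather
        # than using the point estimate, so uncertain rewrites are re-tested.
        return random.betavariate(self.alpha, self.beta)

def choose_rewrite(arms, threshold=0.5):
    """Apply the best sampled rewrite only if it clears a decision boundary;
    otherwise keep the user's original query (return None)."""
    best, score = max(((r, a.sample()) for r, a in arms.items()),
                      key=lambda item: item[1])
    return best if score > threshold else None
```

Because feedback on future turns depends on the rewrite actually issued, sampling (rather than always exploiting the current best estimate) keeps collecting evidence on rewrites whose posteriors are still wide, which is one way to mitigate the self-reinforcing feedback loop the abstract describes.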