PROPS: Probabilistic personalization of black-box sequence models
Saved in:
Main author(s): | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | 2018 IEEE International Conference on Big Data (Big Data), 4768-4774. We present PROPS, a lightweight transfer learning mechanism for sequential data. PROPS learns probabilistic perturbations around the predictions of one or more arbitrarily complex, pre-trained black-box models (such as recurrent neural networks). The technique pins the black-box prediction functions to "source nodes" of a hidden Markov model (HMM), and uses the remaining nodes as "perturbation nodes" for learning customized perturbations around those predictions. In this paper, we describe the PROPS model, provide an algorithm for online learning of its parameters, and demonstrate the consistency of this estimation. We also explore the utility of PROPS in the context of personalized language modeling. In particular, we construct a baseline language model by training an LSTM on the entire Wikipedia corpus of 2.5 million articles (around 6.6 billion words), and then use PROPS to provide lightweight customization into a personalized language model of President Donald J. Trump's tweeting. We achieve good customization after only 2,000 additional words, and find that the PROPS model, being fully probabilistic, provides insight into when President Trump's speech departs from generic patterns in the Wikipedia corpus. Python code (for both the PROPS training algorithm as well as experiment reproducibility) is available at https://github.com/cylance/perturbed-sequence-model. |
---|---|
DOI: | 10.48550/arxiv.1903.02013 |
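
To make the mechanism described in the abstract concrete, below is a minimal, illustrative sketch of the core idea: a hidden Markov model whose "source" state emits according to a fixed, pre-trained black-box predictive distribution, while "perturbation" states carry learnable categorical emissions fit to the personalization corpus. All names (`black_box_predict`, `N_PERTURB`, etc.) are hypothetical and do not reflect the API of the linked repository; the update shown is a crude batch EM-style step over filtered state posteriors that adjusts only the perturbation emissions, not the streaming online-learning algorithm with consistency guarantees described in the paper.

```python
# Illustrative sketch only: an HMM in which one "source" state is pinned to a
# fixed black-box next-token distribution and the remaining "perturbation"
# states have learnable categorical emissions. Hypothetical names throughout.
import numpy as np

VOCAB = 50          # toy vocabulary size
N_SOURCE = 1        # states pinned to the black-box model (emissions fixed)
N_PERTURB = 2       # states with learnable emissions
K = N_SOURCE + N_PERTURB

rng = np.random.default_rng(0)

def black_box_predict(history):
    """Stand-in for a pre-trained model's next-token distribution."""
    probs = np.ones(VOCAB)
    if history:                        # crude history dependence for the toy
        probs[history[-1]] += VOCAB
    return probs / probs.sum()

# HMM parameters: transition matrix A and learnable perturbation emissions B.
A = np.full((K, K), 1.0 / K)
B = rng.dirichlet(np.ones(VOCAB), size=N_PERTURB)

def forward(tokens):
    """Forward pass; source-state emissions come from the black box."""
    alpha = np.full(K, 1.0 / K)
    gammas = []
    for t, tok in enumerate(tokens):
        src_p = black_box_predict(tokens[:t])[tok]
        emis = np.concatenate(([src_p] * N_SOURCE, B[:, tok]))
        alpha = (alpha @ A) * emis
        alpha /= alpha.sum()
        gammas.append(alpha.copy())    # filtered state posteriors
    return np.array(gammas)

def em_step(tokens):
    """One crude batch update of the perturbation emissions only."""
    global B
    gammas = forward(tokens)
    counts = np.zeros((N_PERTURB, VOCAB))
    for g, tok in zip(gammas, tokens):
        counts[:, tok] += g[N_SOURCE:]
    B = (counts + 1e-3) / (counts + 1e-3).sum(axis=1, keepdims=True)

# Toy "personalization" corpus drawn from a skewed distribution.
p = np.r_[np.full(5, 0.15), np.full(VOCAB - 5, 0.25 / (VOCAB - 5))]
corpus = list(rng.choice(VOCAB, size=500, p=p))
for _ in range(5):
    em_step(corpus)
print("perturbation-state mass on top-5 tokens:", B[:, :5].sum(axis=1))
```

The property this sketch is meant to illustrate is that the black-box model is never retrained: all adaptation lives in the small set of HMM parameters around it, which is what keeps PROPS-style customization lightweight.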