Revealing Unobservables by Deep Learning: Generative Element Extraction Networks (GEEN)
Format: Article
Language: English
Abstract: Latent variable models are crucial in scientific research, where a key variable, such as effort, ability, or belief, is unobserved in the sample but needs to be identified. This paper proposes a novel method for estimating realizations of a latent variable $X^*$ in a random sample that contains multiple measurements of it. Under the key assumption that the measurements are independent conditional on $X^*$, we provide sufficient conditions under which the realizations of $X^*$ in the sample are locally unique within a class of deviations, which allows us to identify those realizations. To the best of our knowledge, this paper is the first to provide such identification in observation. We then use the Kullback-Leibler distance between the two probability densities, with and without the conditional independence, as the loss function to train Generative Element Extraction Networks (GEEN) that map the observed measurements to realizations of $X^*$ in the sample. The simulation results imply that the proposed estimator works quite well and that the estimated values are highly correlated with the realizations of $X^*$. Our estimator can be applied to a large class of latent variable models, and we expect it will change how researchers deal with latent variables.
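The abstract's loss criterion can be illustrated with a toy sketch: compare the estimated joint density of the measurements and a candidate latent variable against the factorization that conditional independence would imply, and score the gap in Kullback-Leibler style. This is only an illustrative Monte-Carlo version with kernel density estimates, not the paper's implementation (which trains a neural-network extractor); the function `ci_loss`, the noise level, and all variable names are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n = 1000
x_star = rng.normal(size=n)             # realizations of the latent X*
x1 = x_star + 0.3 * rng.normal(size=n)  # two measurements, conditionally
x2 = x_star + 0.3 * rng.normal(size=n)  # independent given X*

def ci_loss(x_hat):
    """Monte-Carlo estimate of a KL-style gap between the joint density of
    (x1, x2, x_hat) and the density p(x_hat) p(x1 | x_hat) p(x2 | x_hat)
    implied by conditional independence, all via Gaussian KDE."""
    pts = np.vstack([x1, x2, x_hat])
    log_joint = gaussian_kde(pts).logpdf(pts)
    log_factored = (gaussian_kde(np.vstack([x1, x_hat])).logpdf(np.vstack([x1, x_hat]))
                    + gaussian_kde(np.vstack([x2, x_hat])).logpdf(np.vstack([x2, x_hat]))
                    - gaussian_kde(x_hat).logpdf(x_hat))
    return float(np.mean(log_joint - log_factored))

# The true latent makes the measurements conditionally independent, so its
# loss should be smaller than that of an unrelated candidate, for which the
# gap is roughly the mutual information between x1 and x2.
good = ci_loss(x_star)
bad = ci_loss(rng.normal(size=n))
print(good, bad)
```

In the paper's setting, `x_hat` would be the output of a trained network evaluated on the observed measurements, and this gap would serve as the training loss rather than a post-hoc comparison.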
DOI: 10.48550/arxiv.2210.01300