SentenceMIM: A Latent Variable Language Model
Main Authors:
Format: Article
Language: English
Keywords:
Abstract: SentenceMIM is a probabilistic auto-encoder for language data, trained with Mutual Information Machine (MIM) learning to provide a fixed-length representation of variable-length language observations (i.e., similar to a VAE). Previous attempts to learn VAEs for language data faced challenges due to posterior collapse. MIM learning encourages high mutual information between observations and latent variables, and is robust against posterior collapse. As such, it learns informative representations whose dimension can be an order of magnitude higher than in existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare SentenceMIM with VAE and AE on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured latent space, comparable to VAEs. The structured latent representation is demonstrated with interpolation between sentences of different lengths. We demonstrate the versatility of SentenceMIM by utilizing a trained model for question answering and transfer learning, without fine-tuning, outperforming VAE and AE models with similar architectures.
DOI: 10.48550/arxiv.2003.02645
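
The abstract's claim that the SentenceMIM loss has no hyper-parameters can be made concrete by contrasting it with the VAE objective. The sketch below is a minimal, hypothetical illustration (not the authors' code): it assumes a diagonal-Gaussian encoder q(z|x) and standard-normal prior p(z), and writes a MIM-style loss in the general form the abstract describes, an average of the decoding and encoding joint log-likelihoods with samples drawn from the encoder, so there is no weighting term like the beta in a beta-VAE. Function names, shapes, and the exact handling of constant terms are illustrative assumptions and may differ from the paper's formulation.

```python
# Hypothetical sketch contrasting a VAE negative ELBO with a MIM-style loss,
# assuming a Gaussian encoder q(z|x) and a standard-normal prior p(z).
import math
import torch


def gaussian_log_prob(z, mu, logvar):
    """Log-density of a diagonal Gaussian N(mu, exp(logvar)) evaluated at z."""
    return -0.5 * (((z - mu) ** 2) / logvar.exp() + logvar
                   + math.log(2.0 * math.pi)).sum(dim=-1)


def neg_elbo(log_px_given_z, mu, logvar):
    """VAE loss: -E_q[log p(x|z)] + KL(q(z|x) || N(0, I)), KL in closed form."""
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1.0).sum(dim=-1)
    return -log_px_given_z + kl


def mim_loss(log_px_given_z, z, mu, logvar):
    """MIM-style loss with samples drawn from the encoder:
    -(1/2) * [log p(x|z) + log p(z) + log q(z|x)].
    The log q(x) term is constant w.r.t. model parameters and is dropped.
    Note there is no weighting hyper-parameter between the terms."""
    log_pz = gaussian_log_prob(z, torch.zeros_like(z), torch.zeros_like(z))
    log_qz_given_x = gaussian_log_prob(z, mu, logvar)
    return -0.5 * (log_px_given_z + log_pz + log_qz_given_x)


if __name__ == "__main__":
    batch, latent_dim = 4, 16
    mu = torch.randn(batch, latent_dim)
    logvar = 0.1 * torch.randn(batch, latent_dim)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterised sample
    log_px_given_z = -50.0 * torch.rand(batch)            # stand-in decoder log-likelihood
    print("neg ELBO:", neg_elbo(log_px_given_z, mu, logvar))
    print("MIM loss:", mim_loss(log_px_given_z, z, mu, logvar))
```

In this illustrative form, the reconstruction term is shared by both objectives; the difference is that the VAE adds a KL penalty (often scaled by a tunable weight in practice), whereas the MIM-style loss simply averages the two joint log-likelihoods, which is consistent with the abstract's statement that the loss itself introduces no hyper-parameters.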