SentenceMIM: A Latent Variable Language Model

SentenceMIM is a probabilistic auto-encoder for language data, trained with Mutual Information Machine (MIM) learning to provide a fixed length representation of variable length language observations (i.e., similar to VAE). Previous attempts to learn VAEs for language data faced challenges due to posterior collapse. MIM learning encourages high mutual information between observations and latent variables, and is robust against posterior collapse. As such, it learns informative representations whose dimension can be an order of magnitude higher than existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare sentenceMIM with VAE, and AE on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured latent space, comparable to VAEs. The structured latent representation is demonstrated with interpolation between sentences of different lengths. We demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering and transfer learning, without fine-tuning, outperforming VAE and AE with similar architectures.
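The interpolation between sentences of different lengths described in the abstract relies only on the encoder producing a fixed-length latent code for any input. A minimal sketch of that mechanism, using a hypothetical toy bag-of-embeddings encoder in place of the paper's trained model (names `VOCAB`, `encode`, and `interpolate` are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained sentence encoder: the mean of fixed
# per-token embeddings. Any sentence, whatever its length, maps to a
# vector of the same dimension -- the fixed-length-representation
# property the abstract describes.
VOCAB = {w: rng.normal(size=16)
         for w in "the cat sat on a very long mat today".split()}

def encode(sentence):
    """Fixed-length representation of a variable-length sentence."""
    return np.mean([VOCAB[w] for w in sentence.split()], axis=0)

def interpolate(z_a, z_b, t):
    """Linear interpolation between two latent codes, t in [0, 1]."""
    return (1.0 - t) * z_a + t * z_b

z_short = encode("the cat sat")            # 3 tokens -> shape (16,)
z_long = encode("a very long mat today")   # 5 tokens -> shape (16,)
z_mid = interpolate(z_short, z_long, 0.5)  # midpoint, still shape (16,)
```

In the actual model, `z_mid` would be passed to the decoder to generate an intermediate sentence; here the point is only that codes of equal dimension can be mixed regardless of input length.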

Bibliographic Details

Main Authors: Livne, Micha; Swersky, Kevin; Fleet, David J
Format: Article
Language: English
Online Access: Order full text
DOI: 10.48550/arxiv.2003.02645
Source: arXiv.org
Subjects: Computer Science - Computation and Language; Computer Science - Learning; Statistics - Machine Learning