Language as a Latent Sequence: deep latent variable models for semi-supervised paraphrase generation

This paper explores deep latent variable models for semi-supervised paraphrase generation, where the missing target pair for unlabelled data is modelled as a latent paraphrase sequence. We present a novel unsupervised model named variational sequence auto-encoding reconstruction (VSAR), which performs latent sequence inference given an observed text. To leverage information from text pairs, we additionally introduce a novel supervised model we call dual directional learning (DDL), which is designed to integrate with our proposed VSAR model. Combining VSAR with DDL (DDL+VSAR) enables us to conduct semi-supervised learning. Still, the combined model suffers from a cold-start problem. To further combat this issue, we propose an improved weight initialisation solution, leading to a novel two-stage training scheme we call knowledge-reinforced-learning (KRL). Our empirical evaluations suggest that the combined model yields competitive performance against the state-of-the-art supervised baselines on complete data. Furthermore, in scenarios where only a fraction of the labelled pairs are available, our combined model consistently outperforms the strong supervised model baseline (DDL) by a significant margin (p < .05; Wilcoxon test). Our code is publicly available at "https://github.com/jialin-yu/latent-sequence-paraphrase".
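
To make the training scheme in the abstract concrete, the following is a minimal sketch of one combined DDL+VSAR step in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: model_xy and model_yx are assumed sequence-to-sequence models returning (batch, length, vocab) logits, generate is an assumed greedy decoding method, and the weight alpha is an invented interpolation knob; the authors' real code is in the repository linked above.

    import torch
    import torch.nn.functional as F

    def semi_supervised_step(model_xy, model_yx, labelled, unlabelled, alpha=1.0):
        # Illustrative sketch of the DDL+VSAR objective; model and
        # parameter names are assumptions, not the paper's API.

        # DDL: supervised dual directional learning on a labelled pair
        # (x, y), training both the x->y and y->x directions.
        x, y = labelled
        sup = (F.cross_entropy(model_xy(x, y).transpose(1, 2), y)
               + F.cross_entropy(model_yx(y, x).transpose(1, 2), x))

        # VSAR: for unlabelled text u, treat the missing paraphrase as a
        # latent sequence; decode a candidate without gradients, then
        # train the reverse model to reconstruct u from it
        # (auto-encoding reconstruction through the latent sequence).
        u = unlabelled
        with torch.no_grad():
            z = model_xy.generate(u)  # latent paraphrase sample
        unsup = F.cross_entropy(model_yx(z, u).transpose(1, 2), u)

        return sup + alpha * unsup

Under this reading, the two-stage KRL scheme mentioned in the abstract would amount to first optimising the supervised term alone and then switching on the unsupervised term from the pretrained weights, which is how the cold-start problem would be mitigated.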

Bibliographic Details

Published in: arXiv.org, 2023-09
Main authors: Yu, Jialin; Cristea, Alexandra I.; Harit, Anoushka; Sun, Zhongtian; Aduragba, Olanrewaju Tahir; Shi, Lei; Al Moubayed, Noura
Format: Article
Language: English
Subjects: Computer Science - Computation and Language; Computer Science - Learning; Semi-supervised learning; Training
DOI: 10.48550/arxiv.2301.02275
EISSN: 2331-8422
Online access: Full text