A Deep Generative Model for Code-Switched Text

Code-switching, the interleaving of two or more languages within a sentence or discourse, is pervasive in multilingual societies. Accurate language models for code-switched text are critical for NLP tasks. State-of-the-art data-intensive neural language models are difficult to train well from scarce language-labeled code-switched text. A potential solution is to use deep generative models to synthesize large volumes of realistic code-switched text. Although generative adversarial networks and variational autoencoders can synthesize plausible monolingual text from a continuous latent space, they cannot adequately address code-switched text, owing to its informal style and the complex interplay between the constituent languages. We introduce VACS, a novel variational autoencoder architecture specifically tailored to code-switching phenomena. VACS encodes to and decodes from a two-level hierarchical representation, which models syntactic and contextual signals at the lower level and language-switching signals at the upper level. Sampling representations from the prior and decoding them produces well-formed, diverse code-switched sentences. Extensive experiments show that combining synthetic code-switched text with natural monolingual data results in a significant (33.06%) drop in perplexity.
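To make the two-level idea concrete, here is a minimal, hypothetical PyTorch sketch of a hierarchical VAE in the spirit the abstract describes: an upper latent intended to carry language-switching signals and a lower latent for syntactic/contextual signals, with the lower posterior conditioned on the upper sample. This is not the paper's VACS implementation; every name (TwoLevelVAE, q_switch, q_syntax) and every dimension here is an illustrative assumption.

```python
# Hypothetical sketch of a two-level hierarchical VAE for sentence
# generation. Upper latent z_sw stands in for language-switching signals,
# lower latent z_sy for syntactic/contextual signals. Not the paper's code.
import torch
import torch.nn as nn

class TwoLevelVAE(nn.Module):
    def __init__(self, vocab_size, emb=128, hid=256, z_switch=16, z_syntax=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        # Upper-level posterior over the switching latent.
        self.q_switch = nn.Linear(hid, 2 * z_switch)
        # Lower-level posterior, conditioned on the sentence encoding
        # AND the sampled upper latent (the hierarchical link).
        self.q_syntax = nn.Linear(hid + z_switch, 2 * z_syntax)
        self.z_to_state = nn.Linear(z_switch + z_syntax, hid)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.logits = nn.Linear(hid, vocab_size)
        self.z_dim = z_switch + z_syntax

    @staticmethod
    def sample(stats):
        # Reparameterized Gaussian sample plus its KL divergence to N(0, I).
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl

    def forward(self, tokens):
        # tokens: (batch, seq) word ids; teacher-forced reconstruction.
        emb = self.embed(tokens)
        _, (h, _) = self.encoder(emb)
        h = h[-1]                                   # final hidden state
        z_sw, kl_sw = self.sample(self.q_switch(h))
        z_sy, kl_sy = self.sample(self.q_syntax(torch.cat([h, z_sw], -1)))
        h0 = torch.tanh(self.z_to_state(torch.cat([z_sw, z_sy], -1)))
        state = (h0.unsqueeze(0), torch.zeros_like(h0).unsqueeze(0))
        out, _ = self.decoder(emb, state)
        return self.logits(out), (kl_sw + kl_sy).mean()

    @torch.no_grad()
    def generate(self, bos_id, max_len=20, batch=1):
        # Sample both latents from the standard-normal prior and decode
        # greedily -- the "sample from the prior" step in the abstract.
        h0 = torch.tanh(self.z_to_state(torch.randn(batch, self.z_dim)))
        state = (h0.unsqueeze(0), torch.zeros_like(h0).unsqueeze(0))
        tok = torch.full((batch, 1), bos_id, dtype=torch.long)
        ids = []
        for _ in range(max_len):
            out, state = self.decoder(self.embed(tok), state)
            tok = self.logits(out[:, -1]).argmax(-1, keepdim=True)
            ids.append(tok)
        return torch.cat(ids, dim=1)
```

A training step would combine the token-shifted reconstruction loss with the KL term, as in this toy usage:

```python
model = TwoLevelVAE(vocab_size=5000)
tokens = torch.randint(0, 5000, (4, 12))       # stand-in for a real batch
logits, kl = model(tokens)
recon = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 5000), tokens[:, 1:].reshape(-1))
loss = recon + kl                              # negative ELBO
```

The 33.06% perplexity reduction reported above comes from mixing such prior-sampled synthetic sentences with natural monolingual training data, not from this sketch.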


Bibliographic details
Main authors: Samanta, Bidisha; Reddy, Sharmila; Jagirdar, Hussain; Ganguly, Niloy; Chakrabarti, Soumen
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online access: Order full text
DOI: 10.48550/arxiv.1906.08972
Published: 2019-06-21
Source: arXiv.org
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T15%3A45%3A15IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Deep%20Generative%20Model%20for%20Code-Switched%20Text&rft.au=Samanta,%20Bidisha&rft.date=2019-06-21&rft_id=info:doi/10.48550/arxiv.1906.08972&rft_dat=%3Carxiv_GOX%3E1906_08972%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true