Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language

Abstract: This paper compares the pretraining of a transformer on natural language texts and on sentences of a synthetic pseudo-language. The artificial texts are generated automatically according to rules written in a context-free grammar. The results of fine-tuning on tasks of the RussianSuperGLUE project showed, with statistical reliability, that the two models achieved the same scores. That is, the use of artificial texts facilitates AI safety, because the composition of the dataset can be completely controlled. In addition, at the pretraining stage of a RoBERTa-like model it is enough to learn to recognize only the syntactic and morphological patterns of the language, which can be created in a fairly simple way, for example with a context-free grammar.
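The core technique described in the abstract, generating pseudo-language sentences from a context-free grammar, can be illustrated with a minimal sketch. The grammar rules and made-up tokens below are purely hypothetical placeholders, not the ones used by the authors; the sketch only shows how a synthetic corpus could be sampled from CFG productions (here with NLTK) before being fed to a RoBERTa-like masked language model.

```python
# Minimal sketch: sample sentences of a synthetic pseudo-language from a CFG.
# The grammar and vocabulary are illustrative, not the authors' actual rules.
import random

from nltk.grammar import CFG, Nonterminal

# Toy grammar: a real pseudo-language grammar would encode the syntactic and
# morphological patterns of the target language.
grammar = CFG.fromstring("""
S -> NP VP
NP -> Det N | Det Adj N
VP -> V NP | V
Det -> 'ta' | 'ko'
Adj -> 'miru' | 'selo'
N -> 'brona' | 'kelit' | 'dasho'
V -> 'ruvit' | 'palan'
""")

def expand(symbol, rng):
    """Recursively expand a symbol, sampling productions uniformly at random."""
    if not isinstance(symbol, Nonterminal):
        return [symbol]  # terminal: emit the token itself
    production = rng.choice(grammar.productions(lhs=symbol))
    tokens = []
    for child in production.rhs():
        tokens.extend(expand(child, rng))
    return tokens

def generate_sentence(rng):
    """Produce one pseudo-language sentence starting from the grammar's start symbol."""
    return " ".join(expand(grammar.start(), rng))

if __name__ == "__main__":
    rng = random.Random(0)
    # Emit a few synthetic sentences; a pretraining corpus would consist of
    # millions of such automatically generated sentences.
    for _ in range(5):
        print(generate_sentence(rng))
```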

Detailed Description

Bibliographic Details
Published in: Doklady. Mathematics, 2023-12, Vol. 108 (Suppl 2), pp. S494-S502
Main authors: Gorbacheva, T. E.; Bondarenko, I. Y.
Format: Article
Language: English
Subjects: Context; Mathematics; Mathematics and Statistics; Texts
Online access: Full text
container_end_page S502
container_issue Suppl 2
container_start_page S494
container_title Doklady. Mathematics
container_volume 108
creator Gorbacheva, T. E.
Bondarenko, I. Y.
doi_str_mv 10.1134/S1064562423701636
format Article
fulltext fulltext
identifier ISSN: 1064-5624
ispartof Doklady. Mathematics, 2023-12, Vol.108 (Suppl 2), p.S494-S502
issn 1064-5624
1531-8362
language eng
recordid cdi_proquest_journals_2985940780
source Springer Online Journals Complete
subjects Context
Mathematics
Mathematics and Statistics
Texts
title Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language