Headless Language Models: Learning without Predicting with Contrastive Weight Tying
Self-supervised pre-training of language models usually consists of predicting probability distributions over extensive token vocabularies. In this study, we propose an innovative method that shifts away from probability prediction and instead focuses on reconstructing input embeddings in a contrastive fashion via Contrastive Weight Tying (CWT). We apply this approach to pretrain Headless Language Models in both monolingual and multilingual contexts. Our method offers practical advantages, substantially reducing training computational requirements by up to 20 times, while simultaneously enhancing downstream performance and data efficiency. We observe a significant +1.6 GLUE score increase and a notable +2.7 LAMBADA accuracy improvement compared to classical LMs within similar compute budgets.
Saved in:
Main authors: | Godey, Nathan; de la Clergerie, Éric; Sagot, Benoît |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language |
Online access: | Order full text |
Published: | 2023-09-15 |
creator | Godey, Nathan ; de la Clergerie, Éric ; Sagot, Benoît |
description | Self-supervised pre-training of language models usually consists of predicting probability distributions over extensive token vocabularies. In this study, we propose an innovative method that shifts away from probability prediction and instead focuses on reconstructing input embeddings in a contrastive fashion via Contrastive Weight Tying (CWT). We apply this approach to pretrain Headless Language Models in both monolingual and multilingual contexts. Our method offers practical advantages, substantially reducing training computational requirements by up to 20 times, while simultaneously enhancing downstream performance and data efficiency. We observe a significant +1.6 GLUE score increase and a notable +2.7 LAMBADA accuracy improvement compared to classical LMs within similar compute budgets. |
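The abstract describes replacing the usual softmax prediction over the vocabulary with a contrastive objective whose targets are the (tied) input embeddings of the tokens to be reconstructed. The snippet below is a minimal PyTorch sketch of such a loss, not the authors' implementation: the use of in-batch negatives, the temperature value, the cosine normalization, and all tensor shapes are assumptions made for illustration.

```python
# Minimal sketch of a contrastive loss against tied input embeddings.
# Assumptions (not from the paper's code): in-batch negatives, cosine
# similarity, temperature 0.1, and the shapes used in the usage example.
import torch
import torch.nn.functional as F


def contrastive_weight_tying_loss(hidden_states: torch.Tensor,
                                  target_ids: torch.Tensor,
                                  input_embedding: torch.nn.Embedding,
                                  temperature: float = 0.1) -> torch.Tensor:
    """Score each output state against the *input* embeddings of all target
    tokens in the batch instead of projecting onto the full vocabulary.

    hidden_states: (batch, seq_len, dim) final-layer states at prediction positions
    target_ids:    (batch, seq_len) ids of the tokens to reconstruct
    """
    dim = hidden_states.size(-1)
    h = hidden_states.reshape(-1, dim)                       # (N, dim)
    targets = input_embedding(target_ids).reshape(-1, dim)   # (N, dim), tied weights

    h = F.normalize(h, dim=-1)
    targets = F.normalize(targets, dim=-1)

    # Similarity of every prediction to every in-batch target embedding;
    # the matching (diagonal) entry is the positive, all others are negatives.
    # Simplification: repeated target tokens in the batch are treated as
    # distinct negatives here.
    logits = h @ targets.T / temperature                     # (N, N)
    labels = torch.arange(h.size(0), device=h.device)
    return F.cross_entropy(logits, labels)


# Hypothetical usage with made-up shapes:
if __name__ == "__main__":
    emb = torch.nn.Embedding(32000, 768)
    states = torch.randn(4, 128, 768)
    ids = torch.randint(0, 32000, (4, 128))
    print(contrastive_weight_tying_loss(states, ids, emb).item())
```

Because the loss never materializes a vocabulary-sized logit matrix, its cost scales with the number of in-batch targets rather than with vocabulary size, which is consistent with the compute savings claimed in the abstract.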
doi_str_mv | 10.48550/arxiv.2309.08351 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2309.08351 |
language | eng |
recordid | cdi_arxiv_primary_2309_08351 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | Headless Language Models: Learning without Predicting with Contrastive Weight Tying |
url | https://arxiv.org/abs/2309.08351 |