Insights into Pre-training via Simpler Synthetic Tasks
container_end_page | |
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Wu, Yuhuai; Li, Felix; Liang, Percy |
description | Pre-training produces representations that are effective for a wide range of
downstream tasks, but it is still unclear what properties of pre-training are
necessary for effective gains. Notably, recent work shows that even
pre-training on synthetic tasks can achieve significant gains in downstream
tasks. In this work, we perform three experiments that iteratively simplify
pre-training and show that the simplifications still retain much of its gains.
First, building on prior work, we perform a systematic evaluation of three
existing synthetic pre-training methods on six downstream tasks. We find the
best synthetic pre-training method, LIME, attains an average of $67\%$ of the
benefits of natural pre-training. Second, to our surprise, we find that
pre-training on a simple and generic synthetic task defined by the Set function
achieves $65\%$ of the benefits, almost matching LIME. Third, we find that
$39\%$ of the benefits can be attained by using merely the parameter statistics
of synthetic pre-training. We release the source code at
https://github.com/felixzli/synthetic_pretraining. |
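
Note (illustrative, not part of the catalog record): the abstract above reports that pre-training on a simple synthetic task defined by the Set function recovers 65% of the benefits of natural pre-training. The following Python sketch shows one plausible reading of that task, assuming each training example maps a randomly sampled token sequence to its deduplicated tokens; the output ordering, function names, and default sizes are assumptions of this sketch, not taken from the released repository.

```python
import random

# Minimal sketch of a "Set"-style synthetic pre-training example generator.
# Assumption: the task maps an input token sequence to its unique tokens.
def make_set_example(vocab_size=100, seq_len=20, rng=random):
    source = [rng.randrange(vocab_size) for _ in range(seq_len)]
    # Keep first occurrences only; the ordering here is an assumption.
    seen, target = set(), []
    for tok in source:
        if tok not in seen:
            seen.add(tok)
            target.append(tok)
    return source, target

if __name__ == "__main__":
    src, tgt = make_set_example()
    print("source:", src)
    print("target:", tgt)
```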
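The abstract also states that 39% of the benefits can be attained using merely the parameter statistics of synthetic pre-training. The sketch below is an assumption about what that could mean in practice: re-initializing each weight tensor from a normal distribution matching only the per-tensor mean and standard deviation of the synthetically pre-trained weights. All function and tensor names here are hypothetical.

```python
import numpy as np

# Sketch: rebuild a parameter set from per-tensor summary statistics only.
def reinit_from_statistics(pretrained_params, rng=None):
    rng = rng or np.random.default_rng(0)
    new_params = {}
    for name, weight in pretrained_params.items():
        mu, sigma = float(weight.mean()), float(weight.std())
        # Sample a fresh tensor with the same shape, keeping only mean and std.
        new_params[name] = rng.normal(mu, sigma, size=weight.shape)
    return new_params

# Hypothetical usage with toy tensors standing in for Transformer weights.
toy = {"encoder.layer0.attn.q": np.random.randn(8, 8),
       "encoder.layer0.ffn.w1": np.random.randn(8, 32)}
reinit = reinit_from_statistics(toy)
```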
doi_str_mv | 10.48550/arxiv.2206.10139 |
format | Article |
creationdate | 2022-06-21 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
linktorsrc | https://arxiv.org/abs/2206.10139 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2206.10139 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2206_10139 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence Computer Science - Learning |
title | Insights into Pre-training via Simpler Synthetic Tasks |