Combining Induction and Transduction for Abstract Reasoning

When learning an input-output mapping from very few examples, is it better to first infer a latent function that explains the examples, or is it better to directly predict new test outputs, e.g. using a neural network? We study this question on ARC by training neural models for induction (inferring latent functions) and transduction (directly predicting the test output for a given test input). We train on synthetically generated variations of Python programs that solve ARC training tasks. We find inductive and transductive models solve different kinds of test problems, despite having the same training problems and sharing the same neural architecture: Inductive program synthesis excels at precise computations, and at composing multiple concepts, while transduction succeeds on fuzzier perceptual concepts. Ensembling them approaches human-level performance on ARC.
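The sketch below is an illustrative reading of the induction/transduction distinction described in the abstract, not the authors' implementation. The callables `propose_programs` and `predict_output_grid` are hypothetical stand-ins for the paper's fine-tuned neural models, and the verify-then-fall-back ensembling rule is only one plausible way to combine the two modes.

```python
# Conceptual sketch of induction vs. transduction on an ARC-style task.
# NOT the paper's code; the model calls are hypothetical placeholders.
from typing import Callable, List, Optional, Tuple

Grid = List[List[int]]        # an ARC grid: 2D array of color indices
Example = Tuple[Grid, Grid]   # (input grid, output grid)


def solve_by_induction(
    train: List[Example],
    test_input: Grid,
    propose_programs: Callable[[List[Example]], List[Callable[[Grid], Grid]]],
) -> Optional[Grid]:
    """Induction: sample candidate programs (latent functions), keep one that
    reproduces every training pair, then run it on the test input."""
    for program in propose_programs(train):
        try:
            if all(program(x) == y for x, y in train):
                return program(test_input)
        except Exception:
            continue  # a sampled program may crash on some grids
    return None


def solve_by_transduction(
    train: List[Example],
    test_input: Grid,
    predict_output_grid: Callable[[List[Example], Grid], Grid],
) -> Grid:
    """Transduction: predict the test output grid directly, conditioned on the
    training pairs, with no intermediate program."""
    return predict_output_grid(train, test_input)


def solve_ensemble(train, test_input, propose_programs, predict_output_grid) -> Grid:
    """One simple combination rule: prefer an induced program verified on the
    training pairs; otherwise fall back to the transductive prediction."""
    answer = solve_by_induction(train, test_input, propose_programs)
    if answer is not None:
        return answer
    return solve_by_transduction(train, test_input, predict_output_grid)
```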


Bibliographic Details
Main Authors: Li, Wen-Ding, Hu, Keya, Larsen, Carter, Wu, Yuqing, Alford, Simon, Woo, Caleb, Dunn, Spencer M, Tang, Hao, Naim, Michelangelo, Nguyen, Dat, Zheng, Wei-Long, Tavares, Zenna, Pu, Yewen, Ellis, Kevin
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning
Online Access: https://arxiv.org/abs/2411.02272
DOI: 10.48550/arxiv.2411.02272
Source: arXiv.org
Published: 2024-11-04