ItD: Large Language Models Can Teach Themselves Induction through Deduction
Although Large Language Models (LLMs) are showing impressive performance on a wide range of Natural Language Processing tasks, researchers have found that they still have limited ability to conduct induction. Recent works mainly adopt ``post processes'' paradigms to improve the performance of LLMs on induction (e.g., the hypothesis search & refinement methods), but their performance is still constrained by the inherent inductive capability of the LLMs. In this paper, we propose a novel framework, Induction through Deduction (ItD), to enable the LLMs to teach themselves induction through deduction. The ItD framework is composed of two main components: a Deductive Data Generation module to generate induction data and a Naive Bayesian Induction module to optimize the fine-tuning and decoding of LLMs. Our empirical results showcase the effectiveness of ItD on two induction benchmarks, achieving relative performance improvement of 36% and 10% compared with previous state-of-the-art, respectively. Our ablation study verifies the effectiveness of two key modules of ItD. We also verify the effectiveness of ItD across different LLMs and deductors. The data and code of this paper can be found at https://anonymous.4open.science/r/ItD-E844.
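The abstract only names the Naive Bayesian Induction module without detailing it, so the following is a minimal, hypothetical sketch of the general idea it alludes to: a candidate rule is scored by summing the log-probabilities of each observation given that rule (treating observations as conditionally independent), and the best-scoring rule is selected. The function names, the toy scorer, and the example data are illustrative assumptions, not the paper's released implementation (see the repository link in the abstract for that).

```python
import math
from typing import Callable, List

# Hypothetical sketch of Naive-Bayes-style rule selection for induction.
# `loglik` stands in for a deductor LLM that returns log P(observation | rule);
# here a toy string-overlap scorer is used so the example runs on its own.

def score_rule(rule: str, observations: List[str],
               loglik: Callable[[str, str], float], log_prior: float = 0.0) -> float:
    # Treat observations as conditionally independent given the rule,
    # so the joint log-likelihood is a simple sum over observations.
    return log_prior + sum(loglik(rule, obs) for obs in observations)

def induce(rules: List[str], observations: List[str],
           loglik: Callable[[str, str], float]) -> str:
    # Return the candidate rule that best explains all observations.
    return max(rules, key=lambda r: score_rule(r, observations, loglik))

# Toy usage with a word-overlap stand-in for the deductor's log-probability.
def toy_loglik(rule: str, obs: str) -> float:
    overlap = len(set(rule.split()) & set(obs.split()))
    return math.log(1 + overlap) - math.log(1 + len(obs.split()))

rules = ["every input is reversed", "every input is uppercased"]
observations = ["'abc' is reversed to 'cba'", "'xy' is reversed to 'yx'"]
print(induce(rules, observations, toy_loglik))
```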
Saved in:
Published in: | arXiv.org 2024-03 |
---|---|
Main authors: | Sun, Wangtao; Xu, Haotian; Yu, Xuanqing; Chen, Pei; He, Shizhu; Zhao, Jun; Liu, Kang |
Format: | Article |
Language: | eng |
Subjects: | Ablation; Decoding; Effectiveness; Large language models; Modules; Natural language processing; Performance enhancement |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Sun, Wangtao; Xu, Haotian; Yu, Xuanqing; Chen, Pei; He, Shizhu; Zhao, Jun; Liu, Kang |
description | Although Large Language Models (LLMs) are showing impressive performance on a wide range of Natural Language Processing tasks, researchers have found that they still have limited ability to conduct induction. Recent works mainly adopt ``post processes'' paradigms to improve the performance of LLMs on induction (e.g., the hypothesis search & refinement methods), but their performance is still constrained by the inherent inductive capability of the LLMs. In this paper, we propose a novel framework, Induction through Deduction (ItD), to enable the LLMs to teach themselves induction through deduction. The ItD framework is composed of two main components: a Deductive Data Generation module to generate induction data and a Naive Bayesian Induction module to optimize the fine-tuning and decoding of LLMs. Our empirical results showcase the effectiveness of ItD on two induction benchmarks, achieving relative performance improvement of 36% and 10% compared with previous state-of-the-art, respectively. Our ablation study verifies the effectiveness of two key modules of ItD. We also verify the effectiveness of ItD across different LLMs and deductors. The data and code of this paper can be found at https://anonymous.4open.science/r/ItD-E844. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2955959879 |
source | Free E-Journals |
subjects | Ablation; Decoding; Effectiveness; Large language models; Modules; Natural language processing; Performance enhancement |
title | ItD: Large Language Models Can Teach Themselves Induction through Deduction |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-19T05%3A56%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=ItD:%20Large%20Language%20Models%20Can%20Teach%20Themselves%20Induction%20through%20Deduction&rft.jtitle=arXiv.org&rft.au=Sun,%20Wangtao&rft.date=2024-03-09&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2955959879%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2955959879&rft_id=info:pmid/&rfr_iscdi=true |