T-SciQ: Teaching Multimodal Chain-of-Thought Reasoning via Mixed Large Language Model Signals for Science Question Answering

Large Language Models (LLMs) have recently demonstrated exceptional performance on various Natural Language Processing (NLP) tasks. They have also shown the ability to perform chain-of-thought (CoT) reasoning to solve complex problems. Recent studies have explored CoT reasoning in complex multimodal scenarios, such as the science question answering task, by fine-tuning multimodal models on high-quality human-annotated CoT rationales. However, collecting high-quality CoT rationales is usually time-consuming and costly, and the annotated rationales are often inaccurate because essential external information is missing. To address these issues, we propose a novel method, termed T-SciQ, that aims at teaching science question answering with LLM signals. The T-SciQ approach generates high-quality CoT rationales as teaching signals and uses them to train much smaller models to perform CoT reasoning in complex modalities. Additionally, we introduce a novel data mixing strategy to produce more effective teaching data samples for simple and complex science question answering problems. Extensive experiments show that our T-SciQ method achieves new state-of-the-art performance on the ScienceQA benchmark, with an accuracy of 96.18%, outperforming the most powerful fine-tuned baseline by 4.5%. The code is publicly available at https://github.com/T-SciQ/T-SciQ.
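
The abstract describes two ingredients: LLM-generated CoT rationales used as teaching signals for a much smaller student model, and a data mixing strategy that pairs simple questions with plain-answer signals and complex questions with full rationales. The sketch below is a minimal, hypothetical illustration of that pipeline, not the authors' actual code or API; every name in it (teacher_llm, generate_rationale, is_complex, and so on) is an assumption made for illustration.

```python
# Hypothetical sketch of the approach described in the abstract:
# (1) query a large "teacher" LLM for a CoT rationale, and
# (2) mix answer-only targets for simple questions with full-rationale
# targets for complex ones when building the student's training data.
from dataclasses import dataclass


@dataclass
class QAExample:
    question: str
    options: list[str]
    answer: str
    context: str = ""  # e.g., an image caption for multimodal inputs


def generate_rationale(teacher_llm, ex: QAExample) -> str:
    """Ask the teacher LLM for a step-by-step rationale (zero-shot CoT prompt)."""
    prompt = (
        f"Context: {ex.context}\n"
        f"Question: {ex.question}\n"
        f"Options: {', '.join(ex.options)}\n"
        "Explain your reasoning. Let's think step by step."
    )
    return teacher_llm(prompt)  # assumed: a callable returning generated text


def build_training_sample(teacher_llm, ex: QAExample, is_complex: bool) -> dict:
    """Mix teaching signals: full CoT for complex items, answer-only otherwise."""
    if is_complex:
        target = f"{generate_rationale(teacher_llm, ex)}\nAnswer: {ex.answer}"
    else:
        target = f"Answer: {ex.answer}"  # a plain label suffices for easy items
    return {"input": f"{ex.context}\n{ex.question}", "target": target}
```

The intuition behind the mixing step, as the abstract presents it, is that forcing a small student model to imitate long rationales on trivially easy questions adds noise rather than signal, so easy items contribute answer-only samples while hard items contribute full CoT samples.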

Bibliographic Details
Published in: arXiv.org, 2023-12
Main Authors: Wang, Lei; Hu, Yi; He, Jiabang; Xu, Xing; Liu, Ning; Liu, Hui; Heng Tao Shen
Format: Article
Language: English
Subjects: Large language models; Natural language processing; Questions; Reasoning; Science education
Online Access: Full text
EISSN: 2331-8422