DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning
Recent advances in natural language processing, primarily propelled by Large Language Models (LLMs), have showcased their remarkable capabilities grounded in in-context learning. A promising avenue for guiding LLMs in intricate reasoning tasks involves the utilization of intermediate reasoning steps...
Saved in:
Published in: | arXiv.org 2024-03 |
---|---|
Main authors: | Xiong, Jing; Li, Zixuan; Zheng, Chuanyang; Guo, Zhijiang; Yin, Yichun; Xie, Enze; Yang, Zhicheng; Cao, Qingxing; Wang, Haiming; Han, Xiongwei; Tang, Jing; Li, Chengming; Liang, Xiaodan |
Format: | Article |
Language: | eng |
Subjects: | Approximation; Context; Large language models; Mathematical analysis; Natural language processing; Performance enhancement; Queries; Questions; Ranking; Reasoning |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Xiong, Jing Li, Zixuan Zheng, Chuanyang Guo, Zhijiang Yin, Yichun Xie, Enze Yang, Zhicheng Cao, Qingxing Wang, Haiming Han, Xiongwei Tang, Jing Li, Chengming Liang, Xiaodan |
description | Recent advances in natural language processing, primarily propelled by Large Language Models (LLMs), have showcased their remarkable capabilities grounded in in-context learning. A promising avenue for guiding LLMs in intricate reasoning tasks involves the utilization of intermediate reasoning steps within the Chain-of-Thought (CoT) paradigm. Nevertheless, the central challenge lies in the effective selection of exemplars for facilitating in-context learning. In this study, we introduce a framework that leverages Dual Queries and Low-rank approximation Re-ranking (DQ-LoRe) to automatically select exemplars for in-context learning. Dual Queries first query the LLM to obtain LLM-generated knowledge such as CoT, then query the retriever to obtain the final exemplars via both the question and the knowledge. Moreover, for the second query, LoRe employs dimensionality reduction techniques to refine exemplar selection, ensuring close alignment with the input question's knowledge. Through extensive experiments, we demonstrate that DQ-LoRe significantly outperforms prior state-of-the-art methods in the automatic selection of exemplars for GPT-4, enhancing performance from 92.5% to 94.2%. Our comprehensive analysis further reveals that DQ-LoRe consistently outperforms retrieval-based approaches in terms of both performance and adaptability, especially in scenarios characterized by distribution shifts. DQ-LoRe pushes the boundary of in-context learning and opens up new avenues for addressing complex reasoning challenges. Our code is released at https://github.com/AI4fun/DQ-LoRe. |
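The two-stage selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dual-query representation is approximated by concatenating question and CoT embeddings, and the low-rank re-ranking (LoRe) is stood in for by a plain PCA projection via SVD. All function names and parameters here are hypothetical.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto their top-k principal components (stand-in for LoRe)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def dq_lore_select(q_emb, cot_emb, pool_embs, n_candidates=8, n_final=4, k=2):
    """Sketch of DQ-LoRe exemplar selection.
    Stage 1: retrieve candidates by cosine similarity to the dual-query
    (question + LLM-generated CoT) embedding.
    Stage 2: re-rank candidates by distance to the query in a low-rank subspace."""
    query = np.concatenate([q_emb, cot_emb])          # dual-query representation
    sims = pool_embs @ query / (
        np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(query) + 1e-9)
    cand = np.argsort(-sims)[:n_candidates]           # stage 1: coarse retrieval
    stacked = np.vstack([query[None, :], pool_embs[cand]])
    low = pca_reduce(stacked, k)                      # stage 2: low-rank projection
    dist = np.linalg.norm(low[1:] - low[0], axis=1)   # distance to query in subspace
    return cand[np.argsort(dist)[:n_final]]           # final exemplar indices
```

In a real pipeline the CoT embedding would come from a first LLM call on the input question, and the pool would hold embeddings of question–CoT pairs from the training set; the re-ranking step then filters out candidates whose surface similarity does not survive the low-rank projection.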
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2873631318 |
source | Free E-Journals |
subjects | Approximation; Context; Large language models; Mathematical analysis; Natural language processing; Performance enhancement; Queries; Questions; Ranking; Reasoning |
title | DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning |