VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning
The ability of large vision-language models (LVLMs) to critique and correct their reasoning is an essential building block towards their self-improvement. However, a systematic analysis of such capabilities in LVLMs is still lacking. We propose VISCO, the first benchmark to extensively analyze the fine-grained critique and correction capabilities of LVLMs.
Saved in:
Published in: | arXiv.org 2024-12 |
---|---|
Main authors: | Wu, Xueqing; Ding, Yuheng; Li, Bingxuan; Pan, Lu; Yin, Da; Chang, Kai-Wei; Peng, Nanyun |
Format: | Article |
Language: | eng |
Subjects: | Error correction; Human performance; Performance enhancement; Performance evaluation; Reasoning; Visual perception |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Wu, Xueqing; Ding, Yuheng; Li, Bingxuan; Pan, Lu; Yin, Da; Chang, Kai-Wei; Peng, Nanyun |
description | The ability of large vision-language models (LVLMs) to critique and correct their reasoning is an essential building block towards their self-improvement. However, a systematic analysis of such capabilities in LVLMs is still lacking. We propose VISCO, the first benchmark to extensively analyze the fine-grained critique and correction capabilities of LVLMs. Compared to existing work that uses a single scalar value to critique the entire reasoning [4], VISCO features dense and fine-grained critique, requiring LVLMs to evaluate the correctness of each step in the chain-of-thought and provide natural language explanations to support their judgments. Extensive evaluation of 24 LVLMs demonstrates that human-written critiques significantly enhance the performance after correction, showcasing the potential of the self-improvement strategy. However, the model-generated critiques are less helpful and sometimes detrimental to the performance, suggesting that critique is the crucial bottleneck. We identified three common patterns in critique failures: failure to critique visual perception, reluctance to "say no", and exaggerated assumption of error propagation. To address these issues, we propose an effective LookBack strategy that revisits the image to verify each piece of information in the initial reasoning. LookBack significantly improves critique and correction performance by up to 13.5%. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3140664126 |
source | Free E-Journals |
subjects | Error correction; Human performance; Performance enhancement; Performance evaluation; Reasoning; Visual perception |
title | VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T14%3A38%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=VISCO:%20Benchmarking%20Fine-Grained%20Critique%20and%20Correction%20Towards%20Self-Improvement%20in%20Visual%20Reasoning&rft.jtitle=arXiv.org&rft.au=Wu,%20Xueqing&rft.date=2024-12-03&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3140664126%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3140664126&rft_id=info:pmid/&rfr_iscdi=true |
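The abstract above describes VISCO's fine-grained, per-step critique format and the LookBack strategy only in prose. The sketch below illustrates one way such a critique-and-correct loop could be wired up around an arbitrary LVLM backend; it is not the authors' implementation. The `LVLMFn` callable, the prompt wording, and the helper names (`StepCritique`, `lookback_critique`, `correct_with_critique`) are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

# Placeholder type: any function that takes (image bytes, prompt) and returns
# the model's text response. The actual LVLM backend (API client, local model,
# etc.) is deliberately left abstract in this sketch.
LVLMFn = Callable[[bytes, str], str]


@dataclass
class StepCritique:
    step_index: int       # which chain-of-thought step was judged
    is_correct: bool      # per-step verdict, not one scalar for the whole chain
    explanation: str      # natural-language justification for the verdict


def lookback_critique(lvlm: LVLMFn, image: bytes, question: str,
                      cot_steps: List[str]) -> List[StepCritique]:
    """Critique each reasoning step by looking back at the image.

    Every step is verified against the image individually, instead of scoring
    the entire chain at once. Prompt wording is a hypothetical stand-in.
    """
    critiques = []
    for i, step in enumerate(cot_steps):
        prompt = (
            f"Question: {question}\n"
            f"Reasoning step {i + 1}: {step}\n"
            "Look back at the image and verify every visual claim in this step. "
            "Answer 'correct' or 'incorrect', then explain why."
        )
        response = lvlm(image, prompt)
        verdict = response.strip().lower().startswith("correct")
        critiques.append(StepCritique(i, verdict, response))
    return critiques


def correct_with_critique(lvlm: LVLMFn, image: bytes, question: str,
                          cot_steps: List[str]) -> str:
    """Regenerate an answer conditioned on the per-step critiques."""
    critiques = lookback_critique(lvlm, image, question, cot_steps)
    feedback = "\n".join(
        f"Step {c.step_index + 1} "
        f"({'ok' if c.is_correct else 'wrong'}): {c.explanation}"
        for c in critiques
    )
    prompt = "\n".join(
        [f"Question: {question}", "Previous reasoning:"]
        + cot_steps
        + ["Step-by-step critique:", feedback,
           "Using this critique, give a corrected final answer."]
    )
    return lvlm(image, prompt)
```

The design point mirrored from the abstract is that critique is produced per step with a natural-language explanation, so the correction prompt can be conditioned on exactly which step failed rather than on a single scalar score for the whole chain.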