Dynamic Multimodal Evaluation with Flexible Complexity by Vision-Language Bootstrapping

Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities across multimodal tasks such as visual perception and reasoning, leading to strong performance on various multimodal evaluation benchmarks. However, these benchmarks are static in nature and overlap with the pre-training data, resulting in fixed complexity constraints and data contamination issues. This raises concerns about the validity of such evaluation. To address these two challenges, we introduce a dynamic multimodal evaluation protocol called Vision-Language Bootstrapping (VLB). VLB provides a robust and comprehensive assessment of LVLMs with reduced data contamination and flexible complexity. To this end, VLB dynamically generates new visual question-answering samples through a multimodal bootstrapping module that modifies both images and language, while a judge module ensures that newly generated samples remain consistent with the original ones. By composing various bootstrapping strategies, VLB offers dynamic variants of existing benchmarks with diverse complexities, enabling the evaluation to co-evolve with the ever-improving capabilities of LVLMs. Extensive experimental results across multiple benchmarks, including SEEDBench, MMBench, and MME, show that VLB significantly reduces data contamination and exposes the performance limitations of LVLMs.
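The abstract describes a generate-then-verify loop: bootstrapping operators perturb an image/question pair, and a judge module accepts only perturbations that preserve the original answer. The Python sketch below illustrates that protocol in outline only; the operator names, the `VQASample` structure, and the `judge_consistent` stub are illustrative assumptions, not the paper's actual API (real VLB operators also edit the image itself, and a real judge would query a strong LVLM).

```python
import random
from dataclasses import dataclass, replace as dc_replace
from typing import Callable, List

@dataclass
class VQASample:
    image: bytes      # raw image payload (placeholder type for this sketch)
    question: str
    answer: str

# Hypothetical bootstrapping operators: each returns a perturbed copy of the
# sample. The paper's operators modify both images and language; these stubs
# only rephrase the question, purely for illustration.
def rephrase_question(s: VQASample) -> VQASample:
    return dc_replace(s, question=f"In the image shown, {s.question[0].lower()}{s.question[1:]}")

def add_instruction(s: VQASample) -> VQASample:
    return dc_replace(s, question=s.question + " Answer based only on the image.")

OPERATORS: List[Callable[[VQASample], VQASample]] = [rephrase_question, add_instruction]

def judge_consistent(original: VQASample, candidate: VQASample) -> bool:
    """Assumed judge module: accept the candidate only if it still admits the
    original answer. A real judge would consult a strong model; this stub
    accepts everything so the pipeline runs end to end."""
    return True

def bootstrap(sample: VQASample, depth: int = 2) -> VQASample:
    """Compose `depth` randomly chosen operators, keeping only edits the
    judge certifies as answer-preserving; composing more operators yields
    higher-complexity variants of the same benchmark item."""
    current = sample
    for _ in range(depth):
        candidate = random.choice(OPERATORS)(current)
        if judge_consistent(sample, candidate):
            current = candidate  # accept the answer-preserving perturbation
    return current

if __name__ == "__main__":
    seed = VQASample(image=b"", question="What color is the car?", answer="red")
    variant = bootstrap(seed, depth=2)
    print(variant.question)  # a dynamically generated benchmark item
```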

Bibliographic Details
Main authors: Yang, Yue; Zhang, Shuibai; Shao, Wenqi; Zhang, Kaipeng; Bin, Yi; Wang, Yu; Luo, Ping
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online access: Order full text
DOI: 10.48550/arxiv.2410.08695
Source: arXiv.org