VisionZip: Longer is Better but Not Necessary in Vision Language Models

Recent advancements in vision-language models have enhanced performance by increasing the length of visual tokens, making them much longer than text tokens and significantly raising computational costs. However, we observe that the visual tokens generated by popular vision encoders, such as CLIP and SigLIP, contain significant redundancy. To address this, we introduce VisionZip, a simple yet effective method that selects a set of informative tokens for input to the language model, reducing visual token redundancy and improving efficiency while maintaining model performance. The proposed VisionZip can be widely applied to image and video understanding tasks and is well-suited for multi-turn dialogues in real-world scenarios, where previous methods tend to underperform. Experimental results show that VisionZip outperforms the previous state-of-the-art method by at least 5% across nearly all settings. Moreover, our method significantly enhances model inference speed, improving the prefilling time by 8x and enabling the LLaVA-Next 13B model to infer faster than the LLaVA-Next 7B model while achieving better results. Furthermore, we analyze the causes of this redundancy and encourage the community to focus on extracting better visual features rather than merely increasing token length. Our code is available at https://github.com/dvlab-research/VisionZip.
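The abstract describes VisionZip as selecting a subset of informative visual tokens before they reach the language model. The sketch below illustrates one way such a selection step could look; it is a minimal illustration under stated assumptions, not the authors' implementation: scoring tokens by the attention they receive (e.g. from the encoder's [CLS] token), the function name select_informative_tokens, and the default budget of 64 kept tokens are all assumptions made for this example.

    import torch

    def select_informative_tokens(visual_tokens: torch.Tensor,
                                  attn_scores: torch.Tensor,
                                  keep: int = 64) -> torch.Tensor:
        """Keep only the `keep` highest-scoring visual tokens.

        visual_tokens: (batch, num_tokens, dim) patch features from the vision encoder.
        attn_scores:   (batch, num_tokens) informativeness score per token, e.g. the
                       attention each patch receives from the encoder's [CLS] token.
        Returns a (batch, keep, dim) tensor to feed to the language model in place
        of the full visual token sequence.
        """
        # Indices of the top-`keep` most informative tokens for each image.
        top_idx = attn_scores.topk(keep, dim=-1).indices                 # (batch, keep)
        # Gather the corresponding features along the token dimension.
        idx = top_idx.unsqueeze(-1).expand(-1, -1, visual_tokens.size(-1))
        return visual_tokens.gather(dim=1, index=idx)                    # (batch, keep, dim)

For an encoder that produces, say, 576 patch tokens per image, keeping 64 shrinks the visual sequence fed to the language model by roughly 9x, which is the kind of reduction behind the prefilling speedups the abstract reports.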

Bibliographic Details
Main Authors: Yang, Senqiao; Chen, Yukang; Tian, Zhuotao; Wang, Chengyao; Li, Jingyao; Yu, Bei; Jia, Jiaya
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
Online Access: https://arxiv.org/abs/2412.04467
DOI: 10.48550/arxiv.2412.04467
Source: arXiv.org