FiVL: A Framework for Improved Vision-Language Alignment
Large Vision Language Models (LVLMs) have achieved significant progress in integrating visual and textual inputs for multimodal reasoning. However, a recurring challenge is ensuring these models utilize visual information as effectively as linguistic content when both modalities are necessary to formulate an accurate answer. We hypothesize that hallucinations arise due to the lack of effective visual grounding in current LVLMs. This issue extends to vision-language benchmarks, where it is difficult to make the image indispensable for accurate answer generation, particularly in vision question-answering tasks. In this work, we introduce FiVL, a novel method for constructing datasets designed to train LVLMs for enhanced visual grounding and to evaluate their effectiveness in achieving it. These datasets can be utilized for both training and assessing an LVLM's ability to use image content as substantive evidence rather than relying solely on linguistic priors, providing insights into the model's reliance on visual information. To demonstrate the utility of our dataset, we introduce an innovative training task that outperforms baselines alongside a validation method and application for explainability. The code is available at https://github.com/IntelLabs/fivl.
Published in: | arXiv.org, 2024-12 |
---|---|
Main authors: | Aflalo, Estelle; Gabriela Ben Melech Stan; Le, Tiep; Luo, Man; Rosenman, Shachar; Sayak, Paul; Shao-Yen Tseng; Lal, Vasudev |
Format: | Article |
Language: | eng |
Subjects: | Datasets; Effectiveness; Linguistics; Vision; Visual tasks |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Aflalo, Estelle; Gabriela Ben Melech Stan; Le, Tiep; Luo, Man; Rosenman, Shachar; Sayak, Paul; Shao-Yen Tseng; Lal, Vasudev |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3147565041 |
source | Free E-Journals |
subjects | Datasets; Effectiveness; Linguistics; Vision; Visual tasks |
title | FiVL: A Framework for Improved Vision-Language Alignment |