FastRM: An efficient and automatic explainability framework for multimodal generative models

While Large Vision Language Models (LVLMs) have become highly capable of reasoning over human prompts and visual inputs, they are still prone to producing responses that contain misinformation. Identifying incorrect responses that are not grounded in evidence has become a crucial task in building trustworthy AI. Explainability methods such as gradient-based relevancy maps on LVLM outputs can provide insight into the decision process of models; however, these methods are often computationally expensive and not suited for on-the-fly validation of outputs. In this work, we propose FastRM, an efficient method for predicting the explainable relevancy maps of LVLMs. Experimental results show that employing FastRM leads to a 99.8% reduction in compute time for relevancy map generation and a 44.4% reduction in memory footprint for the evaluated LVLM, making explainable AI more efficient and practical, thereby facilitating its deployment in real-world applications.
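
As a rough illustration of the contrast the abstract draws, the sketch below compares a gradient-based relevancy map, which requires a backward pass through the model for each scored output token, with a FastRM-style head that predicts per-token relevancy in a single forward pass. This record does not describe the paper's actual architecture, so the model, shapes, and predictor head here are purely hypothetical stand-ins written in PyTorch.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy stand-in for an LVLM: 100 image-token features feed a small
    # backbone and a language-model head. Shapes are illustrative only.
    hidden, n_tokens, vocab = 64, 100, 32
    backbone = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU())
    lm_head = nn.Linear(hidden, vocab)

    image_tokens = torch.randn(1, n_tokens, hidden, requires_grad=True)
    feats = backbone(image_tokens)

    # Gradient-based relevancy map (the expensive baseline): one backward
    # pass per generated token; token i's relevancy is the gradient norm
    # of the winning logit with respect to that input token.
    logits = lm_head(feats.mean(dim=1))
    top_logit = logits[0, logits.argmax()]
    (grads,) = torch.autograd.grad(top_logit, image_tokens)
    grad_relevancy = grads.norm(dim=-1)          # shape (1, n_tokens)

    # FastRM-style prediction (the cheap path): a small learned head maps
    # the same hidden states to a relevancy score per token in a single
    # forward pass, with no backward pass through the LVLM.
    fastrm_head = nn.Sequential(nn.Linear(hidden, hidden // 2), nn.GELU(),
                                nn.Linear(hidden // 2, 1))
    with torch.no_grad():
        fast_relevancy = fastrm_head(feats).squeeze(-1)   # (1, n_tokens)

    print(grad_relevancy.shape, fast_relevancy.shape)

Under this framing, skipping the backward pass accounts for the kind of compute and memory savings the abstract reports: the predictor needs neither backpropagation nor the activations stored for it. How such a head is actually trained and attached in FastRM is not detailed in this record.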

Bibliographic details

Published in: arXiv.org, 2024-12
Authors: Gabriela Ben-Melech Stan; Aflalo, Estelle; Luo, Man; Rosenman, Shachar; Le, Tiep; Sayak, Paul; Shao-Yen Tseng; Lal, Vasudev
Format: Article
Language: English
Subjects: Explainable artificial intelligence; Visual flight; Visual tasks
Online access: Full text
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Source: Free E-Journals