Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA
Saved in:
Main authors: | Yan, Qianqi ; He, Xuehai ; Yue, Xiang ; Wang, Xin Eric |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence |
Online access: | Order full text |
Tags: | |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Yan, Qianqi He, Xuehai Yue, Xiang Wang, Xin Eric |
description | Large Multimodal Models (LMMs) have shown remarkable progress in medical
Visual Question Answering (Med-VQA), achieving high accuracy on existing
benchmarks. However, their reliability under robust evaluation is questionable.
This study reveals that, when subjected to simple probing evaluation,
state-of-the-art models perform worse than random guessing on medical diagnosis
questions. To address this critical evaluation problem, we introduce the
Probing Evaluation for Medical Diagnosis (ProbMed) dataset to rigorously assess
LMM performance in medical imaging through probing evaluation and procedural
diagnosis. Specifically, probing evaluation pairs original questions with
negation questions that contain hallucinated attributes, while procedural
diagnosis requires reasoning across several diagnostic dimensions for each
image, including modality recognition, organ identification, clinical findings,
abnormalities, and positional grounding. Our evaluation reveals that
top-performing models such as GPT-4o, GPT-4V, and Gemini Pro perform worse than
random guessing on specialized diagnostic questions, indicating significant
limitations in handling fine-grained medical inquiries. In addition, models
such as LLaVA-Med struggle even with more general questions, while results from
CheXagent demonstrate the transferability of expertise across different
modalities of the same organ, showing that specialized domain knowledge remains
crucial for improving performance. This study underscores the urgent need for
more robust evaluation to ensure the reliability of LMMs in critical fields
such as medical diagnosis, where current LMMs are still far from applicable. |
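Read as an evaluation recipe, the probing idea in the abstract amounts to pairing each original yes/no question with an adversarial twin that asserts a hallucinated attribute, and crediting a model only when it answers both. The short Python sketch below illustrates that reading; the question templates, the pair-level scoring rule, and the toy chest-image example are assumptions made for illustration, not the paper's actual ProbMed pipeline.

```python
# Minimal sketch of pair-wise probing evaluation as described in the abstract.
# The question templates, the pair-level scoring rule, and the toy data below
# are illustrative assumptions, not the actual ProbMed construction.

def build_probing_pair(organ, true_finding, hallucinated_finding):
    """Pair an original question (gold answer: yes) with an adversarial
    negation question about a hallucinated attribute (gold answer: no)."""
    original = (f"Is there {true_finding} in the {organ}?", "yes")
    adversarial = (f"Is there {hallucinated_finding} in the {organ}?", "no")
    return original, adversarial


def pair_accuracy(model_answer, pairs):
    """Credit a pair only when both questions are answered correctly, so a
    model that simply agrees with every question scores zero."""
    if not pairs:
        return 0.0
    correct = 0
    for original, adversarial in pairs:
        if all(model_answer(q).strip().lower() == gold
               for q, gold in (original, adversarial)):
            correct += 1
    return correct / len(pairs)


if __name__ == "__main__":
    pairs = [build_probing_pair("left lung", "a pleural effusion", "a rib fracture")]
    always_yes = lambda question: "yes"      # a baseline that agrees with everything
    print(pair_accuracy(always_yes, pairs))  # 0.0 under pair-level scoring
```

Under this kind of paired scoring, an always-agreeing model scores zero while random yes/no guessing on a pair lands at 25%, which is one sense in which a sycophantic model can end up worse than random.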
doi_str_mv | 10.48550/arxiv.2405.20421 |
format | Article |
creationdate | 2024-05-30 |
oa | free_for_read |
rights | http://creativecommons.org/licenses/by/4.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2405.20421 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2405_20421 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence |
title | Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T10%3A48%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Worse%20than%20Random?%20An%20Embarrassingly%20Simple%20Probing%20Evaluation%20of%20Large%20Multimodal%20Models%20in%20Medical%20VQA&rft.au=Yan,%20Qianqi&rft.date=2024-05-30&rft_id=info:doi/10.48550/arxiv.2405.20421&rft_dat=%3Carxiv_GOX%3E2405_20421%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |