{\mu}-Bench: A Vision-Language Benchmark for Microscopy Understanding

Recent advances in microscopy have enabled the rapid generation of terabytes of image data in cell biology and biomedical research. Vision-language models (VLMs) offer a promising solution for large-scale biological image analysis, enhancing researchers' efficiency, identifying new image biomarkers, and accelerating hypothesis generation and scientific discovery.

Full description

Saved in:
Bibliographic Details
Main Authors: Lozano, Alejandro; Nirschl, Jeffrey; Burgess, James; Gupte, Sanket Rajan; Zhang, Yuhui; Unell, Alyssa; Yeung-Levy, Serena
Format: Artikel
Language: eng
Subjects:
Online Access: Order full text
creator Lozano, Alejandro
Nirschl, Jeffrey
Burgess, James
Gupte, Sanket Rajan
Zhang, Yuhui
Unell, Alyssa
Yeung-Levy, Serena
description Recent advances in microscopy have enabled the rapid generation of terabytes of image data in cell biology and biomedical research. Vision-language models (VLMs) offer a promising solution for large-scale biological image analysis, enhancing researchers' efficiency, identifying new image biomarkers, and accelerating hypothesis generation and scientific discovery. However, there is a lack of standardized, diverse, and large-scale vision-language benchmarks to evaluate VLMs' perception and cognition capabilities in biological image understanding. To address this gap, we introduce {\mu}-Bench, an expert-curated benchmark encompassing 22 biomedical tasks across various scientific disciplines (biology, pathology), microscopy modalities (electron, fluorescence, light), scales (subcellular, cellular, tissue), and organisms in both normal and abnormal states. We evaluate state-of-the-art biomedical, pathology, and general VLMs on {\mu}-Bench and find that: i) current models struggle on all categories, even for basic tasks such as distinguishing microscopy modalities; ii) current specialist models fine-tuned on biomedical data often perform worse than generalist models; iii) fine-tuning in specific microscopy domains can cause catastrophic forgetting, eroding prior biomedical knowledge encoded in their base model. iv) weight interpolation between fine-tuned and pre-trained models offers one solution to forgetting and improves general performance across biomedical tasks. We release {\mu}-Bench under a permissive license to accelerate the research and development of microscopy foundation models.
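Finding (iv) in the description above refers to linear weight interpolation between a fine-tuned and its pre-trained checkpoint. A minimal sketch of that idea follows; the `interpolate_weights` helper, the flat state-dict layout, and the `alpha` mixing coefficient are illustrative assumptions, not the paper's exact recipe:

```python
# Hedged sketch: linear interpolation between a pre-trained and a
# fine-tuned checkpoint, represented here as flat name->weight dicts.
# alpha = 0.0 recovers the pre-trained model, alpha = 1.0 the fine-tuned
# one; intermediate values trade task performance against forgetting.

def interpolate_weights(pretrained, finetuned, alpha=0.5):
    """Blend two checkpoints with mixing coefficient alpha in [0, 1]."""
    if pretrained.keys() != finetuned.keys():
        raise ValueError("checkpoints must share the same parameter names")
    return {
        name: (1.0 - alpha) * pretrained[name] + alpha * finetuned[name]
        for name in pretrained
    }


# Toy usage with scalar "weights"; real checkpoints would hold tensors.
base = {"vision.w": 0.0, "text.w": 2.0}
tuned = {"vision.w": 1.0, "text.w": 4.0}
merged = interpolate_weights(base, tuned, alpha=0.25)
```

With tensor frameworks the same elementwise formula applies per parameter; only the arithmetic backend changes.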
doi_str_mv 10.48550/arxiv.2407.01791
format Article
identifier DOI: 10.48550/arxiv.2407.01791
language eng
recordid cdi_arxiv_primary_2407_01791
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
title {\mu}-Bench: A Vision-Language Benchmark for Microscopy Understanding