BrainSegFounder: Towards 3D Foundation Models for Neuroimage Segmentation

The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to interpret and analyze neurological data. This study introduces a novel approach to creating medical foundation models by integrating a large-scale multi-modal magnetic resonance imaging (MRI) dataset derived from 41,400 participants. Our method involves a novel two-stage pretraining approach using vision transformers. The first stage is dedicated to encoding anatomical structures in generally healthy brains, identifying key features such as the shapes and sizes of different brain regions. The second stage concentrates on spatial information, encompassing aspects such as the location and relative positioning of brain structures. We rigorously evaluate our model, BrainFounder, on the Brain Tumor Segmentation (BraTS) challenge and the Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets. BrainFounder demonstrates a significant performance gain, surpassing the previous winning solutions that used fully supervised learning. Our findings underscore the impact of scaling up both model complexity and the volume of unlabeled training data from generally healthy brains, which together enhance the model's accuracy and predictive capabilities on complex neuroimaging tasks with MRI. This research offers transformative insights and practical applications in healthcare and takes substantial steps towards the creation of foundation models for medical AI. Our pretrained models and training code are available at https://github.com/lab-smile/GatorBrain.
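The two-stage pretraining idea described in the abstract — first learning to reconstruct anatomy from partially masked input, then learning where structures sit within the volume — can be illustrated with a deliberately tiny sketch. Everything below (8×8×8 toy "volumes", 2×2×2 patches, linear least-squares stand-ins for the encoder) is an illustrative assumption, not the paper's 3D vision-transformer pipeline; it only shows the shape of the two-stage training structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_volume():
    # Toy stand-in for a 3D MRI volume; real inputs are far larger.
    return rng.normal(size=(8, 8, 8))

def patches(vol):
    # Split a volume into 64 non-overlapping 2x2x2 patches, each
    # flattened to an 8-voxel vector (a crude analogue of ViT patching).
    p = []
    for i in range(0, 8, 2):
        for j in range(0, 8, 2):
            for k in range(0, 8, 2):
                p.append(vol[i:i + 2, j:j + 2, k:k + 2].ravel())
    return np.stack(p)                      # shape (64, 8)

X = np.concatenate([patches(make_volume()) for _ in range(50)])  # (3200, 8)

# Stage 1 (anatomy): self-supervised reconstruction. Mask half of each
# patch's voxels and fit a linear map from masked input to the full patch,
# a minimal analogue of masked-image-modeling pretraining.
mask = np.tile([1, 0, 1, 0, 1, 0, 1, 0], (X.shape[0], 1))
X_masked = X * mask
W_enc, *_ = np.linalg.lstsq(X_masked, X, rcond=None)  # (8, 8) "encoder"

# Stage 2 (spatial): predict each patch's position index from its
# stage-1 features, encouraging location-aware representations. With
# random toy data position is not truly recoverable; in real MRI,
# anatomy correlates with location, which is the point of this stage.
Z = X_masked @ W_enc                        # stage-1 features, (3200, 8)
pos = np.tile(np.arange(64), 50)            # ground-truth patch positions
Y = np.eye(64)[pos]                         # one-hot position targets
W_pos, *_ = np.linalg.lstsq(Z, Y, rcond=None)
pred = np.argmax(Z @ W_pos, axis=1)         # predicted patch positions
```

In the paper's actual setup both stages train a 3D vision transformer on unlabeled MRI before fine-tuning on segmentation; the linear maps above merely mirror the reconstruct-then-localize ordering.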

Bibliographic Details
Main authors: Cox, Joseph; Liu, Peng; Stolte, Skylar E; Yang, Yunchao; Liu, Kang; See, Kyle B; Ju, Huiwen; Fang, Ruogu
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Quantitative Biology - Neurons and Cognition
Online access: Full text
creator Cox, Joseph
Liu, Peng
Stolte, Skylar E
Yang, Yunchao
Liu, Kang
See, Kyle B
Ju, Huiwen
Fang, Ruogu
description The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to interpret and analyze neurological data. This study introduces a novel approach to creating medical foundation models by integrating a large-scale multi-modal magnetic resonance imaging (MRI) dataset derived from 41,400 participants. Our method involves a novel two-stage pretraining approach using vision transformers. The first stage is dedicated to encoding anatomical structures in generally healthy brains, identifying key features such as the shapes and sizes of different brain regions. The second stage concentrates on spatial information, encompassing aspects such as the location and relative positioning of brain structures. We rigorously evaluate our model, BrainFounder, on the Brain Tumor Segmentation (BraTS) challenge and the Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets. BrainFounder demonstrates a significant performance gain, surpassing the previous winning solutions that used fully supervised learning. Our findings underscore the impact of scaling up both model complexity and the volume of unlabeled training data from generally healthy brains, which together enhance the model's accuracy and predictive capabilities on complex neuroimaging tasks with MRI. This research offers transformative insights and practical applications in healthcare and takes substantial steps towards the creation of foundation models for medical AI. Our pretrained models and training code are available at https://github.com/lab-smile/GatorBrain.
format Article
identifier DOI: 10.48550/arxiv.2406.10395
language eng
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Quantitative Biology - Neurons and Cognition
title BrainSegFounder: Towards 3D Foundation Models for Neuroimage Segmentation