nnY-Net: Swin-NeXt with Cross-Attention for 3D Medical Images Segmentation

This paper presents a novel 3D medical image segmentation model called nnY-Net. The name comes from the fact that our model adds a cross-attention module at the bottom of the U-Net structure, forming a Y shape. We integrate the advantages of two recent SOTA models, MedNeXt and SwinUNETR, using a Swin Transformer as the encoder and ConvNeXt as the decoder to design the novel Swin-NeXt structure. The model takes the lowest-level feature map of the encoder as Key and Value and patient features such as pathology and treatment information as Query to compute the attention weights in a cross-attention module. Moreover, we simplify the pre- and post-processing as well as the data augmentation methods of 3D image segmentation based on the dynUnet and nnU-Net frameworks, and integrate the proposed Swin-NeXt with Cross-Attention architecture into this pipeline. Finally, we construct a DiceFocalCELoss to improve training efficiency on the unevenly converging voxel-classification data.
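The cross-attention described above admits a compact sketch: the flattened bottleneck feature map supplies the Keys and Values, a patient-feature vector supplies a single Query, and the attended context is folded back into the decoder input. The PyTorch module below is a minimal illustration under those assumptions; the class name, embedding size, and the residual merge are ours, not the authors' implementation.

```python
import torch
import torch.nn as nn


class PatientCrossAttention(nn.Module):
    """Cross-attention at the U-Net bottleneck: patient features act as the
    Query, the lowest-level encoder feature map supplies Key and Value.
    Illustrative sketch only; names and sizes are not from the paper."""

    def __init__(self, feat_channels: int, patient_dim: int,
                 embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.q_proj = nn.Linear(patient_dim, embed_dim)     # Query from patient data
        self.kv_proj = nn.Linear(feat_channels, embed_dim)  # Key/Value from voxel tokens
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.out_proj = nn.Linear(embed_dim, feat_channels)

    def forward(self, bottleneck: torch.Tensor, patient: torch.Tensor) -> torch.Tensor:
        # bottleneck: (B, C, D, H, W) lowest-level encoder output
        # patient:    (B, patient_dim) pathology / treatment features
        b, c, d, h, w = bottleneck.shape
        tokens = bottleneck.flatten(2).transpose(1, 2)      # (B, D*H*W, C)
        kv = self.kv_proj(tokens)                           # (B, D*H*W, E)
        q = self.q_proj(patient).unsqueeze(1)               # (B, 1, E)
        attended, _ = self.attn(q, kv, kv)                  # (B, 1, E)
        # Fold the patient-conditioned context back into the feature map
        ctx = self.out_proj(attended).squeeze(1).view(b, c, 1, 1, 1)
        return bottleneck + ctx
```

How the attended context is actually merged before the ConvNeXt decoder is not stated in the abstract; the residual broadcast above is just one plausible choice. A sketch of the DiceFocalCELoss follows the record fields at the end of this page.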

Bibliographic Details
Main Authors: Liu, Haixu; Tao, Zerui; Dong, Wenzhen; Sun, Qiuzhuang
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
creator Liu, Haixu; Tao, Zerui; Dong, Wenzhen; Sun, Qiuzhuang
description This paper presents a novel 3D medical image segmentation model called nnY-Net. The name comes from the fact that our model adds a cross-attention module at the bottom of the U-Net structure, forming a Y shape. We integrate the advantages of two recent SOTA models, MedNeXt and SwinUNETR, using a Swin Transformer as the encoder and ConvNeXt as the decoder to design the novel Swin-NeXt structure. The model takes the lowest-level feature map of the encoder as Key and Value and patient features such as pathology and treatment information as Query to compute the attention weights in a cross-attention module. Moreover, we simplify the pre- and post-processing as well as the data augmentation methods of 3D image segmentation based on the dynUnet and nnU-Net frameworks, and integrate the proposed Swin-NeXt with Cross-Attention architecture into this pipeline. Finally, we construct a DiceFocalCELoss to improve training efficiency on the unevenly converging voxel-classification data.
doi_str_mv 10.48550/arxiv.2501.01406
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2501.01406
language eng
recordid cdi_arxiv_primary_2501_01406
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title nnY-Net: Swin-NeXt with Cross-Attention for 3D Medical Images Segmentation
url https://arxiv.org/abs/2501.01406
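The DiceFocalCELoss named in the description is not specified beyond its name. A plausible composition, assuming equal weights and reusing MONAI's DiceFocalLoss together with standard cross-entropy (both the weighting and the use of MONAI are our assumptions), would look like this:

```python
import torch
import torch.nn as nn
from monai.losses import DiceFocalLoss  # MONAI's combined Dice + Focal loss


class DiceFocalCELoss(nn.Module):
    """Sketch of a Dice + Focal + Cross-Entropy compound loss.
    The equal weighting below is an assumption, not the paper's choice."""

    def __init__(self, lambda_dice_focal: float = 1.0, lambda_ce: float = 1.0):
        super().__init__()
        self.dice_focal = DiceFocalLoss(to_onehot_y=True, softmax=True)
        self.ce = nn.CrossEntropyLoss()
        self.lambda_dice_focal = lambda_dice_focal
        self.lambda_ce = lambda_ce

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits: (B, C, D, H, W) raw network output
        # target: (B, 1, D, H, W) integer class labels per voxel
        loss_df = self.dice_focal(logits, target)
        loss_ce = self.ce(logits, target.squeeze(1).long())
        return self.lambda_dice_focal * loss_df + self.lambda_ce * loss_ce
```

Combining a region-based term (Dice), a hard-example term (Focal), and a voxel-wise term (CE) is a common way to stabilize training when class volumes converge unevenly, which matches the motivation given in the abstract.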