METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy
Novel multimodal imaging methods are capable of generating extensive, super high resolution datasets for preclinical research. Yet, a massive lack of annotations prevents the broad use of deep learning to analyze such data. So far, existing generative models fail to mitigate this problem because of frequent labeling errors.
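The abstract describes METGAN's core idea: a dual-pathway generator that produces a tumour image together with its label from real anatomical input, trained in a cycle-consistent setup and constrained by an independent, pretrained segmentor. The snippet below is a minimal, illustrative PyTorch sketch of such a training objective, not the authors' implementation; all names (DualPathwayGenerator, generator_loss, the stand-in discriminator and segmentor) and the loss weights are assumptions made for illustration.

```python
# Illustrative sketch only -- module names and loss weights are assumptions,
# not the METGAN authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualPathwayGenerator(nn.Module):
    """Two parallel conv pathways: one synthesizes the tumour image,
    the other the matching label map, both from an anatomical input."""

    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.image_path = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
        self.label_path = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, anatomy: torch.Tensor):
        return self.image_path(anatomy), self.label_path(anatomy)


def generator_loss(gen, gen_back, segmentor, disc, anatomy):
    """One generator update combining adversarial, cycle-consistency and
    segmentor-agreement terms (relative weights are placeholders)."""
    fake_img, fake_lbl = gen(anatomy)            # anatomy -> (tumour image, label)
    adv = -disc(fake_img).mean()                 # fool the discriminator
    rec_img, _ = gen_back(fake_img)              # map back for cycle consistency
    cyc = F.l1_loss(rec_img, anatomy)
    seg = F.binary_cross_entropy(                # frozen segmentor must agree
        segmentor(fake_img), fake_lbl.detach())  # with the generated label
    return adv + 10.0 * cyc + seg


if __name__ == "__main__":
    gen, gen_back = DualPathwayGenerator(), DualPathwayGenerator()
    disc = nn.Conv2d(1, 1, 3, padding=1)                      # stand-in discriminator
    segmentor = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1),  # stand-in for the
                              nn.Sigmoid())                   # pretrained segmentor
    loss = generator_loss(gen, gen_back, segmentor, disc, torch.randn(2, 1, 64, 64))
    loss.backward()
```

The segmentor term ties each synthesized image to its synthesized label, which is what allows the generated pairs to be used as additional training data for downstream segmentation networks.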
Saved in:
Main Authors: | Horvath, Izabela; Paetzold, Johannes C; Schoppe, Oliver; Al-Maskari, Rami; Ezhov, Ivan; Shit, Suprosanna; Li, Hongwei; Ertuerk, Ali; Menze, Bjoern H |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online Access: | Order full text |
creator | Horvath, Izabela; Paetzold, Johannes C; Schoppe, Oliver; Al-Maskari, Rami; Ezhov, Ivan; Shit, Suprosanna; Li, Hongwei; Ertuerk, Ali; Menze, Bjoern H |
---|---|
description | Novel multimodal imaging methods are capable of generating extensive, super high resolution datasets for preclinical research. Yet, a massive lack of annotations prevents the broad use of deep learning to analyze such data. So far, existing generative models fail to mitigate this problem because of frequent labeling errors. In this paper, we introduce a novel generative method which leverages real anatomical information to generate realistic image-label pairs of tumours. We construct a dual-pathway generator, for the anatomical image and label, trained in a cycle-consistent setup, constrained by an independent, pretrained segmentor. The generated images yield significant quantitative improvement compared to existing methods. To validate the quality of synthesis, we train segmentation networks on a dataset augmented with the synthetic data, substantially improving the segmentation over baseline. |
doi_str_mv | 10.48550/arxiv.2104.10993 |
format | Article |
creationdate | 2021-04-22 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 (open access) |
backlink | https://arxiv.org/abs/2104.10993 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2104.10993 |
language | eng |
recordid | cdi_arxiv_primary_2104_10993 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-31T21%3A57%3A16IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=METGAN:%20Generative%20Tumour%20Inpainting%20and%20Modality%20Synthesis%20in%20Light%20Sheet%20Microscopy&rft.au=Horvath,%20Izabela&rft.date=2021-04-22&rft_id=info:doi/10.48550/arxiv.2104.10993&rft_dat=%3Carxiv_GOX%3E2104_10993%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |