Feedback Assisted Adversarial Learning to Improve the Quality of Cone-beam CT Images

Unsupervised image translation using adversarial learning has been attracting attention to improve the image quality of medical images. However, adversarial training based on the global evaluation values of discriminators does not provide sufficient translation performance for locally different image features. We propose adversarial learning with a feedback mechanism from a discriminator to improve the quality of CBCT images. This framework employs U-net as the discriminator and outputs a probability map representing the local discrimination results. The probability map is fed back to the generator and used for training to improve the image translation. Our experiments using 76 corresponding CT-CBCT images confirmed that the proposed framework could capture more diverse image features than conventional adversarial learning frameworks and produced synthetic images with pixel values close to the reference image and a correlation coefficient of 0.93.

Detailed description

Saved in:
Bibliographic details
Main authors: Hase, Takumi, Nakao, Megumi, Nakamura, Mitsuhiro, Matsuda, Tetsuya
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Hase, Takumi ; Nakao, Megumi ; Nakamura, Mitsuhiro ; Matsuda, Tetsuya
description Unsupervised image translation using adversarial learning has been attracting attention to improve the image quality of medical images. However, adversarial training based on the global evaluation values of discriminators does not provide sufficient translation performance for locally different image features. We propose adversarial learning with a feedback mechanism from a discriminator to improve the quality of CBCT images. This framework employs U-net as the discriminator and outputs a probability map representing the local discrimination results. The probability map is fed back to the generator and used for training to improve the image translation. Our experiments using 76 corresponding CT-CBCT images confirmed that the proposed framework could capture more diverse image features than conventional adversarial learning frameworks and produced synthetic images with pixel values close to the reference image and a correlation coefficient of 0.93.
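The core mechanism described above — a discriminator that emits a per-pixel probability map which is fed back to re-weight the generator's training signal — can be sketched independently of any deep-learning framework. The snippet below is a minimal NumPy illustration, not the authors' implementation: `discriminator_map` is a hypothetical stand-in for the paper's trained U-net discriminator, and the specific weighting scheme (`1 - p_real`) is an assumed way a local probability map could emphasize poorly translated regions.

```python
import numpy as np

def discriminator_map(image):
    """Stand-in for a U-net discriminator: returns a per-pixel
    probability in (0, 1) that each pixel looks 'real'.
    Here it is a fixed toy function, not a trained network."""
    return 1.0 / (1.0 + np.exp(-image))  # element-wise sigmoid

def feedback_weighted_loss(synthetic, reference):
    """Generator loss in which each pixel's error is up-weighted
    where the discriminator's local probability of 'real' is low,
    so training focuses on regions judged poorly translated."""
    prob_real = discriminator_map(synthetic)        # local discrimination results
    weights = 1.0 - prob_real                       # emphasize fake-looking pixels
    per_pixel_error = (synthetic - reference) ** 2  # plain L2 term per pixel
    return float(np.mean(weights * per_pixel_error))

# Toy data: a synthetic image that is a slightly noisy copy of the reference.
rng = np.random.default_rng(0)
ref = rng.normal(size=(8, 8))
syn = ref + 0.1 * rng.normal(size=(8, 8))
loss = feedback_weighted_loss(syn, ref)
```

In an actual adversarial setup the map would come from the discriminator network and the weighted term would be one component of the generator's objective alongside the adversarial loss; the sketch only shows the feedback-weighting idea.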
doi_str_mv 10.48550/arxiv.2210.12578
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2210.12578
language eng
recordid cdi_arxiv_primary_2210_12578
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Feedback Assisted Adversarial Learning to Improve the Quality of Cone-beam CT Images