Conditioning Generative Latent Optimization for Sparse-View CT Image Reconstruction

Computed Tomography (CT) is a prominent example of an imaging inverse problem, highlighting the unrivaled performance of data-driven methods in degraded measurement setups such as sparse X-ray projections. Although a significant proportion of deep learning approaches benefit from large supervised datasets, they cannot generalize to new experimental setups. In contrast, fully unsupervised techniques, most notably score-based generative models, have recently demonstrated similar or better performance than supervised approaches while remaining flexible at test time. However, their use cases are limited, as they need considerable amounts of training data to generalize well. Another unsupervised approach, Deep Image Prior (DIP), takes advantage of the implicit natural bias of deep convolutional networks and has recently been adapted to sparse-view CT by reparameterizing the reconstruction problem. Although this methodology does not require any training dataset, it enforces a weaker prior on the reconstructions than data-driven methods. To fill the gap between these two strategies, we propose an unsupervised conditional approach to the Generative Latent Optimization framework (cGLO). Like DIP, cGLO benefits from the structural bias of a decoder network without requiring any training dataset. However, the prior is further reinforced by a likelihood objective shared between multiple slices reconstructed simultaneously through the same decoder network. In addition, the parameters of the decoder may be initialized on an unsupervised, possibly very small, training dataset to enhance the reconstructions. The resulting approach is tested on full-dose sparse-view CT using multiple training dataset sizes and varying numbers of viewing angles.
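
The abstract describes cGLO at a high level: one decoder network is shared by all slices being reconstructed, each slice gets its own latent code, and the latent codes and decoder weights are optimized jointly under a data-fidelity (likelihood) objective on the sparse-view measurements. The PyTorch sketch below is a minimal, hypothetical illustration of that joint optimization only; the decoder architecture, the random matrix standing in for the sparse-view CT forward operator, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the joint optimization described in the abstract (assumed
# details, not the paper's code): a shared decoder, one latent code per slice,
# and a shared measurement-consistency (likelihood) objective.
import torch
import torch.nn as nn

img_size, n_slices, latent_dim, n_views = 64, 8, 128, 30

# Stand-in forward operator: a fixed random matrix playing the role of the
# sparse-view Radon transform (n_views projections, img_size detector bins each).
A = torch.randn(n_views * img_size, img_size * img_size) / img_size

class Decoder(nn.Module):
    """Small convolutional decoder shared by all slices (structural prior)."""
    def __init__(self, latent_dim, img_size):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 32 * (img_size // 4) ** 2)
        self.net = nn.Sequential(
            nn.Unflatten(1, (32, img_size // 4, img_size // 4)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(self.fc(z))

decoder = Decoder(latent_dim, img_size)
z = nn.Parameter(torch.randn(n_slices, latent_dim) * 0.01)  # one code per slice

# Placeholder sparse-view measurements for the n_slices slices of one scan.
y = torch.randn(n_slices, n_views * img_size)

# Latent codes and decoder weights are optimized together, so every slice's
# measurements shape the same decoder, reinforcing the prior.
opt = torch.optim.Adam(list(decoder.parameters()) + [z], lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    x = decoder(z).reshape(n_slices, -1)      # current slice estimates
    loss = ((x @ A.T - y) ** 2).mean()        # shared likelihood objective
    loss.backward()
    opt.step()
```

Per the abstract, the decoder may additionally be initialized on an unsupervised, possibly very small, training dataset before this per-scan optimization, which is reported to further enhance the reconstructions; that pretraining step is omitted from the sketch.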

Bibliographic Details
Published in: arXiv.org, 2024-04-30
Main authors: Braure, Thomas; Lazaro, Delphine; Hateau, David; Brandon, Vincent; Ginsburger, Kévin
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Subjects: Computed tomography; Datasets; Image reconstruction; Inverse problems; Medical imaging; Optimization; Training
Online access: Full text