Deep-Learning-Based Virtual Refocusing of Images Using an Engineered Point-Spread Function

We present a virtual refocusing method over an extended depth of field (DOF) enabled by cascaded neural networks and a double-helix point-spread function (DH-PSF). This network model, referred to as W-Net, is composed of two cascaded generator and discriminator network pairs. The first generator network learns to virtually refocus an input image onto a user-defined plane, while the second generator learns to perform a cross-modality image transformation, improving the lateral resolution of the output image. Using this W-Net model with DH-PSF engineering, we experimentally extended the DOF of a fluorescence microscope by ∼20-fold. In addition to DH-PSF, we also report the application of this method to another spatially engineered imaging system that uses a tetrapod point-spread function. This approach can be widely used to develop deep-learning-enabled reconstruction methods for localization microscopy techniques that utilize engineered PSFs to considerably improve their imaging performance, including the spatial resolution and volumetric imaging throughput.
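The double-helix PSF underlying this approach encodes axial position in the orientation of two lobes that rotate with defocus. As a rough illustrative sketch of that idea (a toy two-lobe Gaussian model, not the authors' actual optical design; all parameter values and function names are hypothetical):

```python
import numpy as np

def dh_psf(defocus, size=33, sep=6.0, sigma=2.0, rot_per_unit=np.pi / 4):
    """Toy double-helix PSF: two Gaussian lobes whose common axis
    rotates linearly with defocus, so lobe angle encodes depth.
    All parameters are illustrative, not taken from the paper."""
    theta = rot_per_unit * defocus
    yy, xx = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    psf = np.zeros((size, size))
    for s in (+1.0, -1.0):  # the two lobes sit opposite each other
        cx = s * (sep / 2.0) * np.cos(theta)
        cy = s * (sep / 2.0) * np.sin(theta)
        psf += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()  # normalize to unit energy

def lobe_angle(psf):
    """Read the lobe-pair orientation (mod pi) back from the brightest pixel."""
    c = (psf.shape[0] - 1) / 2.0
    y, x = np.unravel_index(np.argmax(psf), psf.shape)
    return np.arctan2(y - c, x - c) % np.pi
```

A reconstruction model such as W-Net would take images blurred by depth-dependent PSFs of this kind and learn to invert them; the toy `lobe_angle` helper only shows that, in principle, depth is readable from the lobe orientation.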

Detailed Description

Bibliographic Details
Published in: ACS Photonics, 2021-07, Vol. 8 (7), pp. 2174-2182
Main Authors: Yang, Xilin; Huang, Luzhe; Luo, Yilin; Wu, Yichen; Wang, Hongda; Rivenson, Yair; Ozcan, Aydogan
Format: Article
Language: English
Online Access: Full text
DOI: 10.1021/acsphotonics.1c00660
ISSN: 2330-4022
Source: American Chemical Society Journals