Deep-Learning-Based Virtual Refocusing of Images Using an Engineered Point-Spread Function

Bibliographic Details
Published in: ACS Photonics, 2021-07, Vol. 8 (7), p. 2174-2182
Authors: Yang, Xilin; Huang, Luzhe; Luo, Yilin; Wu, Yichen; Wang, Hongda; Rivenson, Yair; Ozcan, Aydogan
Format: Article
Language: English
Online Access: Full text
Summary: We present a virtual refocusing method over an extended depth of field (DOF) enabled by cascaded neural networks and a double-helix point-spread function (DH-PSF). This network model, referred to as W-Net, is composed of two cascaded generator and discriminator network pairs. The first generator network learns to virtually refocus an input image onto a user-defined plane, while the second generator learns to perform a cross-modality image transformation, improving the lateral resolution of the output image. Using this W-Net model with DH-PSF engineering, we experimentally extended the DOF of a fluorescence microscope by ∼20-fold. In addition to DH-PSF, we also report the application of this method to another spatially engineered imaging system that uses a tetrapod point-spread function. This approach can be widely used to develop deep-learning-enabled reconstruction methods for localization microscopy techniques that utilize engineered PSFs to considerably improve their imaging performance, including the spatial resolution and volumetric imaging throughput.
ISSN: 2330-4022
DOI: 10.1021/acsphotonics.1c00660
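
To give a concrete picture of the cascaded generator/discriminator arrangement described in the summary above, the following is a minimal, illustrative PyTorch sketch of a W-Net-style cascade. It is not the authors' published implementation: the layer sizes, the encoding of the user-defined refocusing plane as an extra input channel, and the PatchGAN-style discriminators are all assumptions made for illustration.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with LeakyReLU activations.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.1),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.1),
    )

class Generator(nn.Module):
    # Small encoder-decoder stand-in for each generator stage (illustrative depth/width).
    def __init__(self, in_ch, out_ch=1, base=32):
        super().__init__()
        self.enc = conv_block(in_ch, base)
        self.down = nn.Conv2d(base, base * 2, 2, stride=2)
        self.mid = conv_block(base * 2, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e = self.enc(x)
        u = self.up(self.mid(self.down(e)))
        return self.out(self.dec(torch.cat([e, u], dim=1)))

class Discriminator(nn.Module):
    # PatchGAN-style critic for adversarial training of each stage (an assumption here).
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class WNet(nn.Module):
    # Cascade: G1 refocuses the input onto a user-defined plane,
    # G2 performs the cross-modality transformation that improves lateral resolution.
    def __init__(self):
        super().__init__()
        self.g1 = Generator(in_ch=2)  # channel 0: engineered-PSF image, channel 1: target-plane map
        self.g2 = Generator(in_ch=1)
        self.d1, self.d2 = Discriminator(), Discriminator()

    def forward(self, image, plane_map):
        refocused = self.g1(torch.cat([image, plane_map], dim=1))
        enhanced = self.g2(refocused)
        return refocused, enhanced

# Example forward pass on random data (one 128x128 field of view).
model = WNet()
image = torch.rand(1, 1, 128, 128)
plane_map = torch.full((1, 1, 128, 128), 0.3)  # hypothetical normalized encoding of the target depth
refocused, enhanced = model(image, plane_map)

Only the forward data flow is sketched here; the adversarial losses that pair each generator with its discriminator, and the training data (mechanically refocused and higher-resolution target images), are omitted.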