Rapid hyperspectral photothermal mid-infrared spectroscopic imaging from sparse data for gynecologic cancer tissue subtyping
Saved in:
Main authors: , , , , , , , , , ,
Format: Article
Language: English
Keywords:
Online access: Order full text
Summary: Ovarian cancer detection has traditionally relied on a multi-step process
that includes biopsy, tissue staining, and morphological analysis by
experienced pathologists. While widely practiced, this conventional approach
suffers from several drawbacks: it is qualitative, time-intensive, and heavily
dependent on the quality of staining. Mid-infrared (MIR) hyperspectral
photothermal imaging is a label-free, biochemically quantitative technology
that, when combined with machine learning algorithms, can eliminate the need
for staining and provide quantitative results comparable to traditional
histology. However, this technology is slow. This work presents a novel
approach to MIR photothermal imaging that enhances its speed by an order of
magnitude. Our method significantly accelerates data collection by capturing a
combination of high-resolution and interleaved, lower-resolution infrared band
images and applying computational techniques for data interpolation. We
effectively minimize data collection requirements by leveraging sparse data
acquisition and employing curvelet-based reconstruction algorithms. This method
enables the reconstruction of high-quality, high-resolution images from
undersampled datasets and achieves a 10X improvement in data acquisition time.
We assessed the performance of our sparse imaging methodology using a variety
of quantitative metrics, including mean squared error (MSE), structural
similarity index (SSIM), and tissue subtype classification accuracies,
employing both random forest and convolutional neural network (CNN) models,
accompanied by ROC curves. Our statistically robust analysis, based on data
from 100 ovarian cancer patient samples and over 65 million data points,
demonstrates the method's capability to produce superior image quality and
accurately distinguish between different gynecological tissue types with
segmentation accuracy exceeding 95%.
DOI: 10.48550/arxiv.2402.17960
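The interleaved sparse-acquisition idea in the abstract can be illustrated with a toy sketch. The paper itself uses curvelet-based reconstruction; here plain per-column linear interpolation stands in for it, and the synthetic image, sampling step, and metric setup are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for one infrared band image: a random walk along rows,
# smooth enough that skipped rows can be recovered by interpolation.
full = np.cumsum(rng.normal(size=(64, 64)), axis=0)

step = 4  # acquire only every 4th row -> roughly 4x fewer measurements
sampled_rows = np.arange(0, full.shape[0], step)
sparse = full[sampled_rows]

# Reconstruct the skipped rows column-by-column with linear interpolation
# (a simple substitute for the curvelet-based recovery described above).
all_rows = np.arange(full.shape[0])
recon = np.empty_like(full)
for c in range(full.shape[1]):
    recon[:, c] = np.interp(all_rows, sampled_rows, sparse[:, c])

# Mean squared error of the reconstruction, one of the metrics the abstract lists.
mse = float(np.mean((full - recon) ** 2))
```

The reconstruction reproduces the measured rows exactly and interpolates the rest; in the paper this trade-off between acquisition time and reconstruction fidelity is quantified with MSE and SSIM over real tissue images.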