A deep neural network for parametric image reconstruction on a large axial field-of-view PET
Published in: European journal of nuclear medicine and molecular imaging, 2023-02, Vol. 50 (3), p. 701-714
Main authors: , , , , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract:

Purpose
PET scanners with a long axial field of view (AFOV) have ~20 times higher sensitivity than conventional scanners and provide new opportunities for enhanced parametric imaging, but they suffer from the dramatically increased volume and complexity of dynamic data. This study reconstructed a high-quality direct Patlak Ki image from five-frame sinograms, without an input function, using a deep learning framework based on DeepPET, to explore the potential of artificial intelligence to reduce the acquisition time and the dependence on the input function in parametric imaging.
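For context, the Patlak Ki that the network predicts is conventionally obtained as the slope of the Patlak plot, which requires a plasma input function. The sketch below is a minimal illustration of that conventional estimate, not something taken from the paper: the frame times, tissue curve, and input function are toy placeholders. It is included only to clarify the quantity the proposed framework learns to produce without an input function.

```python
# Minimal illustration of conventional Patlak graphical analysis (hypothetical data).
import numpy as np

def patlak_ki(t_frames, c_tissue, c_plasma):
    """Estimate the Patlak slope Ki (and intercept) for one voxel/VOI.

    t_frames : frame mid-times (min)
    c_tissue : tissue activity concentration per frame
    c_plasma : plasma input function sampled at the same times
    """
    # Cumulative integral of the input function up to each frame time
    int_cp = np.array([np.trapz(c_plasma[: i + 1], t_frames[: i + 1])
                       for i in range(len(t_frames))])
    # Patlak coordinates: x = (integral of Cp)/Cp, y = Ct/Cp
    x = int_cp / c_plasma
    y = c_tissue / c_plasma
    # Ki is the slope of the linear late-time portion of the Patlak plot
    ki, intercept = np.polyfit(x, y, 1)
    return ki, intercept

# Hypothetical late frames (40-65 min post-injection, matching the study's window)
t = np.array([42.5, 47.5, 52.5, 57.5, 62.5])            # frame mid-times, min
cp = 8.0 * np.exp(-0.02 * t)                             # toy plasma input function
ct = 0.03 * np.array([np.trapz(cp[: i + 1], t[: i + 1])  # toy irreversible uptake
                      for i in range(len(t))]) + 0.5 * cp
print(patlak_ki(t, ct, cp))                              # slope ~ 0.03 (the toy "true" Ki)
```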
Methods
This study was implemented on a large AFOV PET/CT scanner (Biograph Vision Quadra), and twenty patients who underwent 18F-fluorodeoxyglucose (18F-FDG) dynamic scans were recruited. During training and testing of the proposed deep learning framework, the last five-frame (25 min, 40–65 min post-injection) sinograms were used as input, and the Patlak Ki images reconstructed by the vendor's nested EM algorithm were used as ground truth. To evaluate the quality of the predicted Ki images, the mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) were calculated. In addition, linear regression was applied between predicted and reference Ki means over avid malignant lesions and tumor volumes of interest (VOIs).
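As a rough illustration of the training setup described above (input: five late-frame sinograms; ground truth: nested-EM Patlak Ki images), the sketch below shows a generic convolutional encoder-decoder trained with an MSE loss. It is not the authors' DeepPET or self-attention network; the layer sizes, the sinogram/image dimensions, and the assumption that the output grid matches the input grid are simplifications for illustration only (a real sinogram-to-image network must map between the projection and image domains, which generally have different sizes).

```python
# Sketch of the supervised setup: five-frame sinograms -> Patlak Ki image.
# Shapes, layers, and data are placeholders, not the authors' architecture.
import torch
import torch.nn as nn

class SinogramToKi(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                  # 5 input channels = 5 frames
            nn.Conv2d(5, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.ReLU(),  # Ki >= 0
        )

    def forward(self, sino):                           # sino: (B, 5, H, W)
        return self.decoder(self.encoder(sino))        # Ki image: (B, 1, H, W)

model = SinogramToKi()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One toy training step on random tensors standing in for
# (five-frame sinograms, nested-EM Ki ground truth) pairs.
sino = torch.rand(2, 5, 128, 128)
ki_target = torch.rand(2, 1, 128, 128)
optimizer.zero_grad()
pred = model(sino)
loss = loss_fn(pred, ki_target)
loss.backward()
optimizer.step()
print(pred.shape, float(loss))
```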
Results
In the testing phase, the proposed method achieved an excellent MSE of less than 0.03%, with high SSIM and PSNR of ~0.98 and ~38 dB, respectively. Moreover, there was a high correlation (DeepPET: R² = 0.73, self-attention DeepPET: R² = 0.82) between predicted and traditionally reconstructed Patlak Ki means over eleven lesions.
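The figures of merit reported above (MSE, PSNR, SSIM, and the lesion-level R²) could be computed along the lines of the sketch below. The image arrays are random placeholders and the per-lesion means are hypothetical, not study data, and the exact normalization used in the paper may differ.

```python
# Sketch of image-quality metrics and lesion-level regression (placeholder data).
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)
from scipy.stats import linregress

pred_ki = np.random.rand(128, 128).astype(np.float32)   # predicted Ki image
true_ki = np.random.rand(128, 128).astype(np.float32)   # nested-EM Ki "ground truth"

data_range = float(true_ki.max() - true_ki.min())
mse = mean_squared_error(true_ki, pred_ki)
psnr = peak_signal_noise_ratio(true_ki, pred_ki, data_range=data_range)
ssim = structural_similarity(true_ki, pred_ki, data_range=data_range)

# Lesion-level agreement: regress predicted vs. reference mean Ki over lesion VOIs
# (hypothetical per-lesion means below).
true_means = np.array([0.012, 0.025, 0.031, 0.044, 0.052])
pred_means = np.array([0.013, 0.023, 0.033, 0.041, 0.055])
fit = linregress(true_means, pred_means)
print(f"MSE={mse:.5f}, PSNR={psnr:.1f} dB, SSIM={ssim:.3f}, R^2={fit.rvalue**2:.2f}")
```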
Conclusions
The results show that the deep learning-based method produced high-quality parametric images from a small number of frames of projection data without an input function. It has much potential to address the dilemma of long scan times and the dependence on the input function that still hamper the clinical translation of dynamic PET.
ISSN: 1619-7070, 1619-7089
DOI: 10.1007/s00259-022-06003-4