Deep learning for improving PET/CT attenuation correction by elastic registration of anatomical data
Published in: European journal of nuclear medicine and molecular imaging, 2023-07, Vol. 50 (8), p. 2292-2304
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
Background
For PET/CT, the CT transmission data are used to correct the PET emission data for attenuation. However, subject motion between the two consecutive scans can misalign the CT and PET data and cause artifacts in the PET reconstruction. A method to match the CT to the PET would reduce the resulting artifacts in the reconstructed images.
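As background context only (not part of the paper), the way the CT feeds the correction can be sketched as follows: CT numbers are mapped to 511 keV linear attenuation coefficients and integrated along each line of response to give a multiplicative correction factor. The HU-to-μ conversion below is a deliberately simplified single-slope version (clinical scanners use calibrated bilinear curves), and `hu_to_mu_511`, `acf_along_lor`, and `MU_WATER_511` are hypothetical names for illustration.

```python
import numpy as np

MU_WATER_511 = 0.096  # approx. linear attenuation coefficient of water at 511 keV, in cm^-1

def hu_to_mu_511(hu_volume):
    """Map CT numbers (HU) to 511 keV attenuation coefficients.
    Deliberately simplified single-slope conversion; clinical systems
    use calibrated bilinear curves with a separate bone segment."""
    hu = np.asarray(hu_volume, dtype=float)
    return np.clip(MU_WATER_511 * (1.0 + hu / 1000.0), 0.0, None)

def acf_along_lor(mu_map, lor_voxels, step_cm):
    """Attenuation correction factor for one line of response (LOR):
    ACF = exp( sum of mu over the sampled voxels * step length )."""
    mu_samples = mu_map[tuple(np.asarray(lor_voxels).T)]
    return float(np.exp(np.sum(mu_samples) * step_cm))
```

If the CT is misaligned with the PET emission data, for example because of respiratory motion, these correction factors are computed along the wrong tissue path; this mismatch is the source of the artifacts that the registration approach described below aims to remove.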
Purpose
This work presents a deep learning technique for inter-modality, elastic registration of PET/CT images for improving PET attenuation correction (AC). The feasibility of the technique is demonstrated for two applications: general whole-body (WB) imaging and cardiac myocardial perfusion imaging (MPI), with a specific focus on respiratory and gross voluntary motion.
Materials and methods
A convolutional neural network (CNN) was developed and trained for the registration task, comprising two distinct modules: a feature extractor and a displacement vector field (DVF) regressor. The network took a non-attenuation-corrected PET/CT image pair as input and returned the relative DVF between them; it was trained in a supervised fashion using simulated inter-image motion. The 3D motion fields produced by the network were used to resample the CT image volumes, elastically warping them to spatially match the corresponding PET distributions. Performance of the algorithm was evaluated in independent sets of WB clinical subject data: for recovering deliberate misregistrations imposed on motion-free PET/CT pairs and for reducing reconstruction artifacts in cases with actual subject motion. The efficacy of this technique is also demonstrated for improving PET AC in cardiac MPI applications.
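As a rough PyTorch-style sketch of the two-module idea described above: the abstract does not give the layer configuration, loss, or DVF convention, so `RegistrationNet`, `warp_ct_with_dvf`, the channel counts, and the (x, y, z) ordering are all hypothetical choices for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationNet(nn.Module):
    """Illustrative two-module layout: a shared feature extractor followed by
    a displacement-vector-field (DVF) regressor (layer details are assumed)."""

    def __init__(self, ch=16):
        super().__init__()
        # Feature extractor: NAC PET and CT stacked as two input channels.
        self.features = nn.Sequential(
            nn.Conv3d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        # DVF regressor: predicts a 3-channel displacement field (x, y, z), in voxels.
        self.dvf_head = nn.Conv3d(ch, 3, 3, padding=1)

    def forward(self, nac_pet, ct):
        x = torch.cat([nac_pet, ct], dim=1)            # (B, 2, D, H, W)
        dvf = self.dvf_head(self.features(x))          # coarse DVF
        # Upsample the DVF back to the input grid.
        return F.interpolate(dvf, size=nac_pet.shape[2:],
                             mode="trilinear", align_corners=False)

def warp_ct_with_dvf(ct, dvf):
    """Resample the CT volume with the predicted DVF (normalized-grid warping)."""
    B, _, D, H, W = ct.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, D, device=ct.device),
        torch.linspace(-1, 1, H, device=ct.device),
        torch.linspace(-1, 1, W, device=ct.device), indexing="ij")
    grid = torch.stack([xx, yy, zz], dim=-1).unsqueeze(0).expand(B, -1, -1, -1, -1)
    # Convert voxel displacements to normalized offsets and add to the grid.
    scale = torch.tensor([2.0 / max(W - 1, 1), 2.0 / max(H - 1, 1),
                          2.0 / max(D - 1, 1)], device=ct.device)
    offset = dvf.permute(0, 2, 3, 4, 1) * scale        # DVF channels assumed (x, y, z)
    return F.grid_sample(ct, grid + offset, mode="bilinear",
                         padding_mode="border", align_corners=True)
```

In the supervised setting described above, a simulated displacement field applied to a motion-free pair would provide the regression target for the predicted DVF; the paper's actual loss and training details are not given in the abstract.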
Results
A single registration network was found to be capable of handling a variety of PET tracers. It demonstrated state-of-the-art performance in the PET/CT registration task and was able to significantly reduce the effects of simulated motion imposed on motion-free clinical data. Registering the CT to the PET distribution was also found to reduce various types of AC artifacts in the reconstructed PET images of subjects with actual motion. In particular, liver uniformity was improved in subjects with significant observable respiratory motion. For MPI, the proposed approach yielded advantages for correcting artifacts in myocardial activity quantification and could potentially reduce the rate of associated diagnostic errors.
Conclusion
This study demonstrated the feasibility of using deep learning for registering the anatomical image to improve AC in clinical PET/CT reconstruction.
ISSN: 1619-7070, 1619-7089
DOI: 10.1007/s00259-023-06181-9