Classification of amyloid positivity in PET imaging using end‐to‐end deep learning: a multi‐cohort, multi‐tracer analysis
Published in: Alzheimer's & dementia, 2023-06, Vol. 19 (S2), p. n/a
Main authors: , , , , , , , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract

Background
Positron emission tomography (PET) imaging with amyloid‐specific tracers is the gold standard for assessing amyloid positivity in vivo. However, visual interpretation of PET images is limited by availability of clinicians and interobserver variability. Classification of amyloid status typically requires calculation of standardized uptake value ratios (SUVRs), correspondence with neuroanatomy from magnetic resonance imaging, and additional complex, computationally intensive preprocessing steps requiring specialized software. We developed a deep learning model that can automatically classify unprocessed PET images as amyloid positive or negative.
Method
Florbetapir (AV45), florbetaben (FBB), and Pittsburgh compound B (PiB) images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), AV45 and PiB images from the Open Access Series of Imaging Studies 3 (OASIS3), and FBB, flutemetamol (FMT), and flutafuranol (NAV4694) images from the Australian Imaging, Biomarkers and Lifestyle Study of Ageing (AIBL) were used (Table 1). Each image was resampled to isotropic 2 mm voxels, and voxel intensities between the 5th and 95th percentiles were normalized to zero mean and unit variance. We trained and internally validated a convolutional neural network (CNN) on AV45 images from ADNI before testing it on images from an independent cohort and/or acquired with a different tracer.
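The preprocessing described above (resampling to isotropic 2 mm voxels, then percentile-based intensity normalization) could be sketched as follows. This is a minimal illustration, not the authors' code: the function name `preprocess_pet`, the use of `scipy.ndimage.zoom` with trilinear interpolation, and clipping to the 5th/95th percentiles before standardization are all assumptions about implementation details the abstract leaves open.

```python
# Hypothetical sketch of the preprocessing described in the abstract.
# Assumptions: trilinear resampling via scipy.ndimage.zoom, and clipping
# to the 5th/95th percentile range before z-normalization.
import numpy as np
from scipy.ndimage import zoom

TARGET_VOXEL_MM = 2.0  # isotropic target resolution stated in the abstract


def preprocess_pet(volume: np.ndarray, voxel_size_mm: tuple) -> np.ndarray:
    # Resample each axis so the output grid has isotropic 2 mm voxels.
    factors = [s / TARGET_VOXEL_MM for s in voxel_size_mm]
    resampled = zoom(volume, factors, order=1)  # trilinear interpolation

    # Restrict intensities to the 5th-95th percentile range, then
    # standardize to zero mean and unit variance.
    lo, hi = np.percentile(resampled, [5, 95])
    clipped = np.clip(resampled, lo, hi)
    return (clipped - clipped.mean()) / clipped.std()


# Example on a synthetic volume acquired at 1.5 mm isotropic resolution
vol = np.random.rand(96, 96, 64).astype(np.float32)
out = preprocess_pet(vol, (1.5, 1.5, 1.5))
print(out.shape, float(out.mean()), float(out.std()))
```

Because the network consumes these standardized volumes directly, no SUVR computation, MRI coregistration, or other specialized preprocessing software is required at inference time.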
Result
The model achieved an AUC of 0.98 [0.97, 1.0] on held-out AV45 test images from ADNI and generalized well to AV45 images from OASIS3, achieving an AUC of 0.95 [0.92, 0.97] (Table 2). Critically, performance remained robust when the model was evaluated on images acquired with other tracers (FBB, FMT, PiB, and NAV4694), with an AUC of at least 0.97 on each, demonstrating that it learned a tracer-agnostic pattern of abnormality. Other performance metrics support the same conclusions.
Conclusion
We developed a deep learning model that can classify amyloid PET scans with a high degree of accuracy. We demonstrated that the model generalizes well to unseen images acquired from an independent cohort, under a different protocol, and/or with tracers unseen during training. Our model may usefully provide concise metrics for amyloid research as well as support clinical assessment of amyloid-specific PET when expertise or computational resources are limited.
ISSN: 1552-5260, 1552-5279
DOI: 10.1002/alz.067388