Deep learning‐based harmonization of trabecular bone microstructures between high‐ and low‐resolution CT imaging
Published in: Medical physics (Lancaster), 2024-06, Vol. 51(6), pp. 4258-4270
Main authors: , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract

Background
Osteoporosis is a bone disease associated with increased bone loss and fracture risk. The variability in bone strength is partially explained by bone mineral density (BMD); the remainder is contributed by bone microstructure. Recently, clinical CT has emerged as a viable option for in vivo bone microstructural imaging. Wide variations in spatial resolution and other imaging features among different CT scanners add inconsistency to derived bone microstructural metrics, motivating the harmonization of image data from different scanners.
Purpose
This paper presents a new deep learning (DL) method for the harmonization of bone microstructural images derived from low‐ and high‐resolution CT scanners and evaluates the method's performance at the levels of image data as well as derived microstructural metrics.
Methods
We generalized a three-dimensional (3D) version of GAN-CIRCLE, which applies two generative adversarial networks (GANs) constrained by the identical, residual, and cycle learning ensemble (CIRCLE). The two GAN modules simultaneously learn to map low-resolution CT (LRCT) to high-resolution CT (HRCT) and vice versa. Twenty volunteers were recruited, and LRCT and HRCT scans of the distal tibia of their left legs were acquired. Five hundred pairs of LRCT and HRCT image blocks of 64 × 64 × 64 voxels were sampled for each of twelve volunteers and used for training in both supervised and unsupervised setups. LRCT and HRCT images of the remaining eight volunteers were used for evaluation. LRCT blocks were sampled at 32-voxel intervals in each coordinate direction, and the predicted HRCT blocks were stitched together to generate a predicted HRCT image.
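Because blocks are sampled at a 32-voxel stride but are 64 voxels wide, neighboring predicted blocks overlap. The abstract does not state how overlaps are blended; a minimal sketch, assuming simple per-voxel averaging of overlapping predictions (the function name and blending rule are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def stitch_blocks(blocks, starts, out_shape):
    """Average overlapping predicted blocks into one volume.

    blocks    : list of (B, B, B) arrays (predicted HRCT blocks)
    starts    : list of (z, y, x) corner indices where each block was sampled
    out_shape : shape of the full reconstructed volume
    """
    acc = np.zeros(out_shape, dtype=np.float64)  # running sum of predictions
    cnt = np.zeros(out_shape, dtype=np.float64)  # number of blocks covering each voxel
    for blk, (z, y, x) in zip(blocks, starts):
        bz, by, bx = blk.shape
        acc[z:z + bz, y:y + by, x:x + bx] += blk
        cnt[z:z + bz, y:y + by, x:x + bx] += 1.0
    cnt[cnt == 0] = 1.0  # leave uncovered voxels at zero without dividing by zero
    return acc / cnt

# With a 64-voxel block and 32-voxel stride, corner indices would be
# generated as: starts = [(z, y, x) for z in range(0, Z - 63, 32) ...]
```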
Results
Mean ± standard deviation of structural similarity (SSIM) values between predicted and true HRCT using both the 3D GAN-CIRCLE-based supervised (0.84 ± 0.03) and unsupervised (0.83 ± 0.04) methods were significantly (p …
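For reference, SSIM compares two images through their local means, variances, and covariance. Reported SSIM values are typically computed with a sliding window (e.g., `skimage.metrics.structural_similarity`); the single-window (global) variant below only illustrates the formula and is not the evaluation code used in the paper:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global (single-window) SSIM between two arrays of equal shape."""
    c1 = (0.01 * data_range) ** 2  # stabilizers from the standard SSIM definition
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical inputs yield an SSIM of 1; structural differences push the value toward (or below) zero.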
ISSN: 0094-2405, 2473-4209
DOI: 10.1002/mp.17003