Multi-tasking deep network for tinnitus classification and severity prediction from multimodal structural MR images

Bibliographic details
Published in: Journal of Neural Engineering, 2023-02, Vol. 20 (1), p. 16017
Authors: Lin, Chieh-Te; Ghosh, Sanjay; Hinkley, Leighton B; Dale, Corby L; Souza, Ana C S; Sabes, Jennifer H; Hess, Christopher P; Adams, Meredith E; Cheung, Steven W; Nagarajan, Srikantan S
Format: Article
Language: English
Online access: Full text
Description
Abstract: Subjective tinnitus is an auditory phantom perceptual disorder without an objective biomarker. Fast and efficient diagnostic tools will advance clinical practice by detecting or confirming the condition, tracking change in severity, and monitoring treatment response. Motivated by evidence of subtle anatomical, morphological, or functional information in magnetic resonance images of the brain, we examine data-driven machine learning methods for joint tinnitus classification (tinnitus or no tinnitus) and tinnitus severity prediction. We propose a deep multi-task multimodal framework for tinnitus classification and severity prediction using structural MRI (sMRI) data. To leverage complementary information across multimodal neuroimaging data, we integrate two modalities of three-dimensional sMRI: T1-weighted (T1w) and T2-weighted (T2w) images. To explore the key components in the MR images that drove task performance, we segment both T1w and T2w images into three components: cerebrospinal fluid, grey matter, and white matter, and evaluate the performance of each segmented image. Results demonstrate that our multimodal framework capitalizes on the information across both modalities (T1w and T2w) for the joint task of tinnitus classification and severity prediction. Our model outperforms existing learning-based and conventional methods in terms of accuracy, sensitivity, specificity, and negative predictive value.
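
The kind of multi-task multimodal design the abstract describes can be sketched, under illustrative assumptions, as a two-branch 3D convolutional network: one encoder per sMRI modality (T1w, T2w), fused features, and two heads for tinnitus classification and severity regression. The layer sizes, volume dimensions, loss weighting, and the name MultiTaskTinnitusNet below are assumptions for illustration, not the authors' published architecture.

```python
# Minimal sketch of a multi-task, multimodal 3D CNN (illustrative, not the paper's model).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3D convolution -> batch norm -> ReLU -> 2x spatial downsampling
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(2),
    )


class MultiTaskTinnitusNet(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder branch per sMRI modality (T1w and T2w volumes)
        self.enc_t1 = nn.Sequential(conv_block(1, 8), conv_block(8, 16), conv_block(16, 32))
        self.enc_t2 = nn.Sequential(conv_block(1, 8), conv_block(8, 16), conv_block(16, 32))
        self.pool = nn.AdaptiveAvgPool3d(1)
        # Shared layer over the concatenated (fused) modality features
        self.shared = nn.Sequential(nn.Linear(64, 32), nn.ReLU(inplace=True))
        # Task heads: binary classification logit and scalar severity score
        self.cls_head = nn.Linear(32, 1)
        self.sev_head = nn.Linear(32, 1)

    def forward(self, t1w, t2w):
        f1 = self.pool(self.enc_t1(t1w)).flatten(1)
        f2 = self.pool(self.enc_t2(t2w)).flatten(1)
        h = self.shared(torch.cat([f1, f2], dim=1))
        return self.cls_head(h), self.sev_head(h)


# Joint training objective: classification loss plus severity-regression loss,
# combined with a hypothetical trade-off weight `lam`.
model = MultiTaskTinnitusNet()
t1w = torch.randn(2, 1, 64, 64, 64)      # toy batch of T1w volumes
t2w = torch.randn(2, 1, 64, 64, 64)      # matching T2w volumes
cls_logit, severity = model(t1w, t2w)
labels = torch.tensor([[1.0], [0.0]])    # tinnitus / no tinnitus
scores = torch.tensor([[0.7], [0.0]])    # normalized severity targets (assumed scale)
lam = 0.5
loss = nn.BCEWithLogitsLoss()(cls_logit, labels) + lam * nn.MSELoss()(severity, scores)
loss.backward()
```

Sharing the fused representation across both heads is what makes the setup multi-task: gradients from the severity target and the diagnostic label jointly shape the same encoder features.
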
ISSN: 1741-2560, 1741-2552
DOI: 10.1088/1741-2552/acab33