Vision-Based Freezing of Gait Detection With Anatomic Directed Graph Representation

Bibliographic Details
Published in: IEEE Journal of Biomedical and Health Informatics, 2020-04, Vol. 24 (4), p. 1215-1225
Main authors: Hu, Kun; Wang, Zhiyong; Mei, Shaohui; Ehgoetz Martens, Kaylena A.; Yao, Tingting; Lewis, Simon J. G.; Feng, David Dagan
Format: Article
Language: English
Description
Abstract: Parkinson's disease significantly impacts the quality of life of millions of people around the world. While freezing of gait (FoG) is one of the most common symptoms of the disease, assessing FoG is time-consuming and subjective even for well-trained experts. Therefore, it is highly desirable to devise computer-aided FoG detection methods for objective and time-efficient assessment. In this paper, in line with the gold standard of clinical FoG assessment, which requires video or direct observation, we propose one of the first vision-based methods for automatic FoG detection. To better characterize FoG patterns, instead of learning an overall representation of a video, we propose a novel graph convolutional neural network architecture and represent each video as a directed graph whose vertices are FoG-related candidate regions. A weakly supervised learning strategy and a weighted adjacency matrix estimation layer are proposed to eliminate the resource-expensive data annotation required for fully supervised learning. As a result, the interference of visual information irrelevant to FoG, such as the gait motion of supporting staff involved in clinical assessments, is reduced by identifying the vertices that contribute to FoG events, which improves FoG detection performance. To further improve performance, the global context of a clinical video is also considered, and several strategies for fusing it with the graph predictions are investigated. Experimental results on more than 100 videos collected from 45 patients during clinical assessments demonstrated the promising performance of our proposed method, with an AUC of 0.887.
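
The abstract describes a graph convolutional network over FoG-related candidate regions, with a learned weighted adjacency matrix and weakly supervised clip-level training. As a rough illustration of that idea only, and not the authors' actual architecture (layer sizes, pooling, and the fusion with global context are not specified in this record), the following minimal PyTorch sketch shows one possible reading; all class names, dimensions, and the max-pooling choice are hypothetical.

import torch
import torch.nn as nn

class AdjacencyEstimation(nn.Module):
    # Estimates a weighted (directed) adjacency matrix from vertex features.
    def __init__(self, dim):
        super().__init__()
        self.src = nn.Linear(dim, dim, bias=False)  # projection for source vertices
        self.dst = nn.Linear(dim, dim, bias=False)  # projection for target vertices

    def forward(self, x):                            # x: (N, D) vertex features
        scores = self.src(x) @ self.dst(x).t()       # (N, N) pairwise affinities
        return torch.softmax(scores, dim=-1)         # row-normalised directed edge weights

class GraphConvBlock(nn.Module):
    # One graph convolution layer followed by pooling to a clip-level FoG score.
    def __init__(self, dim, hidden):
        super().__init__()
        self.adj = AdjacencyEstimation(dim)
        self.gconv = nn.Linear(dim, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, x):                            # x: (N, D) candidate-region features
        a = self.adj(x)                              # learned adjacency, shape (N, N)
        h = torch.relu(self.gconv(a @ x))            # propagate along edges, then transform
        vertex_scores = self.score(h).squeeze(-1)    # per-vertex FoG evidence
        # Weak supervision: only a clip-level label is available, so vertex scores
        # are pooled into one clip score (max pooling shown as one possible choice).
        return vertex_scores.max()

# Example: a clip with 8 candidate regions, each described by a 256-d feature vector.
model = GraphConvBlock(dim=256, hidden=128)
clip_score = model(torch.randn(8, 256))

Pooling per-vertex scores into a clip-level score is what lets the network be trained from clip-level labels alone while still identifying which vertices contribute to FoG events.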
ISSN: 2168-2194, 2168-2208
DOI: 10.1109/JBHI.2019.2923209