Towards automation of dynamic-gaze video analysis taking functional upper-limb tasks as a case study
Published in: Computer Methods and Programs in Biomedicine, 2021-05, Vol. 203, Article 106041
Format: Article
Language: English
Online access: Full text
Summary:
• The semi-automated coding algorithm for the dynamic-gaze video analysis process minimizes potential human errors compared to manual coding.
• Strong agreement with the manual coding analysis (Cohen's Kappa > 0.8) confirms the validity of the developed coding algorithm.
• Significant time savings in eliciting clinically meaningful results from the gaze data for upper-limb prosthesis users.
• A combination of image-processing techniques and a fuzzy-logic controller addresses the dynamic changes in the location and orientation of the AOIs (see the sketch after this list).
• Provides a solid basis for full automation of the dynamic-gaze video analysis process.
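The record does not specify the fuzzy-logic controller itself, so the following is only a minimal sketch of how fuzzy membership grades could decide whether the gaze point lies on a moving AOI. The `tri` membership function, the 40-pixel distance threshold, and the `on_aoi_degree` rule are illustrative assumptions, not the paper's actual controller.

```python
# Hypothetical fuzzy-membership sketch for grading whether the gaze point
# lies on a moving AOI. Shapes and thresholds are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def on_aoi_degree(distance_px, overlap_ratio):
    """Combine 'gaze is near the AOI centre' with 'crosshair overlaps the AOI'."""
    near = tri(distance_px, 0.0, 0.0, 40.0)       # full membership at 0 px, none past 40 px
    overlap = tri(overlap_ratio, 0.2, 1.0, 1.8)   # peaks at full overlap
    return min(near, overlap)                     # fuzzy AND (minimum t-norm)

# e.g. on_aoi_degree(10, 0.9) -> 0.75, a high degree, so the frame
# would count toward that AOI even as the AOI moves between frames.
```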
Previous studies in motor control have yielded clear evidence that gaze behavior (where someone looks) quantifies the attention paid while performing actions. However, eliciting clinically meaningful results from gaze data has been done manually, which is tedious, time-consuming, and highly subjective. This paper studies the feasibility of automating the gaze-data coding process, taking functional upper-limb tasks as a case study.
This is achieved by developing a new algorithm that codes the collected gaze data in three main stages: data preparation, data processing, and output generation. The input data, in the form of a crosshair and a gaze video, are converted into a 25 Hz frame sequence. Keyframes and non-keyframes are then extracted and processed using a combination of image-processing techniques and a fuzzy-logic controller. For each trial, the location and duration of gaze fixations on the areas of interest (AOIs) are obtained. Once coded, the gaze data can be presented in different forms and formats, including a stacked color bar.
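As a rough illustration of these three stages, the sketch below walks a gaze video at the stated 25 Hz rate and accumulates per-AOI fixation durations. It is a minimal sketch, assuming the video has already been resampled to 25 Hz and assuming hypothetical `locate_crosshair` and `locate_aois` helpers (standing in for the paper's image-processing and fuzzy-logic stages, which are not detailed in this record).

```python
# Minimal sketch of the three-stage coding pipeline: prepare frames,
# process them, and emit per-AOI fixation durations.
# locate_crosshair / locate_aois are hypothetical caller-supplied helpers.
import cv2

FRAME_RATE = 25  # Hz, the frame rate stated in the abstract

def code_gaze_video(video_path, locate_crosshair, locate_aois):
    cap = cv2.VideoCapture(video_path)
    frame_period = 1.0 / FRAME_RATE
    durations = {}  # AOI name -> accumulated fixation time in seconds
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gx, gy = locate_crosshair(frame)  # gaze point in pixel coordinates
        # locate_aois returns {name: (x, y, w, h)} for each AOI in this frame
        for name, (x, y, w, h) in locate_aois(frame).items():
            if x <= gx <= x + w and y <= gy <= y + h:
                durations[name] = durations.get(name, 0.0) + frame_period
                break  # attribute each frame to at most one AOI
    cap.release()
    return durations
```

The per-AOI duration dictionary returned here is the kind of per-trial fixation summary that could then be rendered as the stacked color bar mentioned above.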
The obtained results showed that the developed coding algorithm agrees strongly with the manual coding method while being significantly faster and less prone to unsystematic errors. Statistical analysis showed that Cohen's Kappa ranges from 0.705 to 1.0. Moreover, based on the intra-class correlation coefficient (ICC), the agreement index between the computerized and manual coding methods is (i) 0.908 with a 95% confidence interval of (0.867, 0.937) for the anatomical hand and (ii) 0.923 with a 95% confidence interval of (0.888, 0.948) for the prosthetic hand. A Bland-Altman plot also showed that all data points are closely scattered around the mean difference. These findings confirm the validity and effectiveness of the developed coding algorithm.
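Agreement statistics of this kind can be computed with standard libraries. The sketch below is one possible way to obtain Cohen's Kappa and the ICC, using scikit-learn and pingouin (a library choice of convenience, not the paper's tooling); the labels and duration values are toy data with assumed variable names.

```python
# Illustrative agreement analysis between manual and automated coding.
# All data below are toy values, not the paper's measurements.
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Per-frame AOI labels from the two coding methods (toy data)
manual    = ["hand", "object", "hand", "elsewhere", "object"]
automated = ["hand", "object", "hand", "object", "object"]
print("Cohen's Kappa:", cohen_kappa_score(manual, automated))

# ICC on per-trial fixation durations (toy data); pingouin reports 95% CIs
df = pd.DataFrame({
    "trial":    [1, 1, 2, 2, 3, 3],
    "coder":    ["manual", "auto"] * 3,
    "duration": [1.2, 1.1, 0.8, 0.9, 2.0, 2.1],
})
print(pg.intraclass_corr(data=df, targets="trial", raters="coder", ratings="duration"))
```

For the Bland-Altman view reported in the paper, pingouin also offers `plot_blandaltman` for plotting paired measurements against their mean difference.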
The developed algorithm demonstrated t…
ISSN: 0169-2607, 1872-7565
DOI: 10.1016/j.cmpb.2021.106041