A smartphone‐based self‐administered test of verbal episodic memory: Development and initial validation
Published in: Alzheimer's & Dementia, 2021-12, Vol. 17 (S11), p. e056040-n/a
Format: Article
Language: English
Online access: Full text
Abstract
Background
Remote, self‐administered assessment of cognitive impairment offers immense value to all those impacted by Alzheimer's disease: patients, carers, healthcare professionals, researchers and pharmaceutical companies alike. The high cost, slow pace and inaccessibility of traditional methods for evaluating novel assessment tools maintain a chasm between innovation and real‐world impact. Focusing on user‐centred design and technical feature optimisation, we present the next generation of a digitised version of the examiner‐administered Rey Auditory Verbal Learning Test (RAVLT) (Morrison et al., 2018; Mackin et al., 2017) as a smartphone‐based self‐administered test of verbal episodic memory.
Method
A smartphone‐based version of the RAVLT was developed, integrating artificial intelligence (AI), psychometric assessment principles and user‐centric design. Key risks to user acceptability were identified through moderated and unmoderated testing of the user interface (UI) with 11 users (aged 50+). AI speech‐recognition solutions implemented to replace the traditional human components of the RAVLT were tested for accuracy, via comparison to a human listener, in a real‐world environment (N = 13).
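The abstract does not describe how speech‐recognition accuracy was computed against the human listener. The following minimal Python sketch illustrates one plausible scoring approach, assuming word‐level agreement on RAVLT target words; the word list, function names and example transcripts are illustrative and not taken from the study.

```python
# Minimal sketch (not the authors' implementation): score an automatic
# transcript of a RAVLT recall trial against a human listener's transcript,
# treating the human scoring as the reference for each target word.

RAVLT_LIST_A = [  # a commonly cited version of List A, shown for illustration only
    "drum", "curtain", "bell", "coffee", "school",
    "parent", "moon", "garden", "hat", "farmer",
    "nose", "turkey", "colour", "house", "river",
]

def detected_words(transcript: str, target_words: list[str]) -> set[str]:
    """Return the target words present in a lower-cased transcript."""
    tokens = set(transcript.lower().split())
    return {w for w in target_words if w in tokens}

def agreement(asr_transcript: str, human_transcript: str,
              target_words: list[str]) -> float:
    """Proportion of target words on which the ASR output and the human
    listener agree (both detect the word, or both miss it)."""
    asr_hits = detected_words(asr_transcript, target_words)
    human_hits = detected_words(human_transcript, target_words)
    agree = sum((w in asr_hits) == (w in human_hits) for w in target_words)
    return agree / len(target_words)

if __name__ == "__main__":
    asr = "drum bell coffee school mood garden"      # 'moon' misheard as 'mood'
    human = "drum bell coffee school moon garden"
    print(f"word-level agreement: {agreement(asr, human, RAVLT_LIST_A):.2%}")
```

Under a scheme like this, accuracy reflects how often the automated pipeline and the human listener make the same hit‐or‐miss judgement for each target word.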
Result
All testers were able to complete the assessment without additional assistance, indicating high UI acceptability. Qualitative outcomes highlighted potential patient anxiety; these insights were used to iterate the UI design. AI solutions, when implemented alongside data‐driven adjustment features, achieved speech‐recognition accuracy of over 92%.
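The data‐driven adjustment features are not specified in the abstract. One hypothetical example of such an adjustment, sketched below, is tolerating near‐miss transcriptions by matching them to the target list with a string‐similarity threshold; the helper names and threshold value are assumptions, not details from the study.

```python
# Illustrative sketch only: accept near-miss ASR transcriptions whose
# similarity to a target word exceeds a threshold that would be tuned
# on pilot recordings (a data-driven adjustment in that sense).

from difflib import SequenceMatcher

def fuzzy_hit(token: str, target: str, threshold: float = 0.8) -> bool:
    """Treat a transcribed token as a hit if it is sufficiently similar
    to the target word."""
    return SequenceMatcher(None, token.lower(), target.lower()).ratio() >= threshold

def adjusted_hits(transcript: str, target_words: list[str]) -> set[str]:
    """Target words matched either exactly or by fuzzy similarity."""
    tokens = transcript.lower().split()
    return {w for w in target_words if any(fuzzy_hit(t, w) for t in tokens)}

print(adjusted_hits("drum bel coffe school mood",
                    ["drum", "bell", "coffee", "school", "moon"]))
# e.g. {'drum', 'bell', 'coffee', 'school'}
# ('bel' and 'coffe' pass the similarity threshold; 'mood' vs 'moon' does not)
```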
Conclusion
We present extrapolatable principles for developers of remote assessment tools. Thorough inclusion of patients in the development process is essential to ensure that technical and clinical accuracy can be mirrored by optimal adherence and patient experience. Novel technical solutions required to adapt clinic‐based tests for remote use must be validated in ecologically appropriate environments. A fully remote validation study is underway to assess user acceptability and speech‐recognition accuracy at scale in people with subjective memory complaints (SMC). These methods and results hold promise for the continued development of effective at‐home digital tools for remote assessment and reduction of time‐to‐market for innovations in Alzheimer's disease and beyond.
References:
Morrison et al., 2018, Alzheimer's Dement (Amst), 10, 647-656. https://doi.org/10.1016/j.dadm.2018.08.010
Mackin et al., 2017, Alzheimer's & Dementia, 1
ISSN: 1552-5260; 1552-5279
DOI: 10.1002/alz.056040