Enabling Self-Practice of Digital Audio–Tactile Maps for Visually Impaired People by Large Language Models


Bibliographic Details
Published in: Electronics (Basel) 2024-06, Vol. 13 (12), p. 2395
Main authors: Tran, Chanh Minh, Bach, Nguyen Gia, Tan, Phan Xuan, Kamioka, Eiji, Kanamaru, Manami
Format: Article
Language: English
Online access: Full text
Description
Summary: Digital audio–tactile maps (DATMs) on touchscreen devices provide valuable opportunities for people who are visually impaired (PVIs) to explore the spatial environment and engage in travel activities. Existing DATM solutions usually require extensive training before PVIs can understand the feedback mechanism. Because of the shortage of training specialists, and because PVIs need frequent practice to maintain their usage skills, it has been difficult to adopt DATMs widely in real life. This paper discusses the use of large language models (LLMs) to provide a verbal evaluation of the PVIs' perception, which is crucial for independent practice of DATM usage. A smartphone-based prototype providing DATMs of simple floor plans was developed for a preliminary investigation. The evaluation results show that interacting with the LLM helped the participants better understand the DATMs' content and replicate it vividly in drawings.
ISSN: 2079-9292
DOI: 10.3390/electronics13122395