Multimodal fusion for anticipating human decision performance

Bibliographic Details
Published in: Scientific Reports, 2024-06, Vol. 14 (1), Article 13217 (16 pages)
Authors: Tran, Xuan-The, Do, Thomas, Pal, Nikhil R., Jung, Tzyy-Ping, Lin, Chin-Teng
Format: Article
Language: English
Online access: Full text
Abstract: Anticipating human decisions while performing complex tasks remains a formidable challenge. This study proposes a multimodal machine-learning approach that leverages image features and electroencephalography (EEG) data to predict human response correctness in a demanding visual search task. Notably, we extract a novel set of image features pertaining to object relationships using the Segment Anything Model (SAM), which enhances prediction accuracy compared to traditional features. Additionally, our approach effectively combines EEG signals and image features to streamline the feature set required by the Random Forest Classifier (RFC) while maintaining high accuracy. The findings of this research hold substantial potential for developing advanced fault-alert systems, particularly in critical decision-making environments such as the medical and defence sectors.
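The abstract describes the pipeline only at a high level. As a rough illustration of the fusion idea, the Python sketch below concatenates per-trial image features (standing in for SAM-derived object-relationship statistics) with per-trial EEG features and feeds them to a scikit-learn Random Forest Classifier. The array shapes, feature counts, and synthetic data are illustrative assumptions, not the authors' actual implementation.

    # Minimal sketch of the fusion idea from the abstract: fuse image-derived
    # features with per-trial EEG features, then train a Random Forest to
    # predict whether each response will be correct.
    # Shapes and feature semantics here are illustrative assumptions only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials = 500

    # Hypothetical precomputed per-trial features:
    #   image_feats - e.g. mask counts and overlap/containment ratios from SAM
    #   eeg_feats   - e.g. band power per channel over the pre-response window
    image_feats = rng.normal(size=(n_trials, 12))
    eeg_feats = rng.normal(size=(n_trials, 64))
    correct = rng.integers(0, 2, size=n_trials)  # 1 = correct response

    # Early fusion: concatenate the two modalities before classification.
    X = np.hstack([image_feats, eeg_feats])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    scores = cross_val_score(clf, X, correct, cv=5)
    print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

With real data, the fitted forest's feature importances (clf.feature_importances_) would indicate how the classifier weighs the two modalities, which is one plausible route to the feature-set streamlining the abstract mentions.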
ISSN: 2045-2322
DOI: 10.1038/s41598-024-63651-2