Towards Informing an Intuitive Mission Planning Interface for Autonomous Multi-Asset Teams via Image Descriptions



Bibliographic details
Authors: Le Vie, Lisa R.; Last, Mary Carolyn; Barrows, Bryan B.; Allen, B. Danette
Format: Conference paper
Language: English
Abstract: Establishing a basis for certification of autonomous systems using trust and trustworthiness is the focus of Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR). The Human-Machine Interface (HMI) team is working to capture and utilize the multitude of ways in which humans are already comfortable communicating mission goals and to translate that into an intuitive mission planning interface. Several input/output modalities (speech/audio, typing/text, touch, and gesture) are being considered and investigated in the context of human-machine teaming for the ATTRACTOR design reference mission (DRM) of Search and Rescue or, more generally, intelligence, surveillance, and reconnaissance (ISR). The first of these investigations, the Human Informed Natural-language GANs Evaluation (HINGE) data collection effort, is aimed at building an image description database to train a Generative Adversarial Network (GAN). In addition to building the database, the HMI team was interested in whether, and how, modality (spoken vs. written) affects different aspects of the image descriptions given. The results will be analyzed to better inform the design of an interface for mission planning.
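As background on the GAN mentioned above: the abstract does not detail the HINGE training setup, but a GAN generally pits a generator against a discriminator via opposing losses. The sketch below shows only the two standard loss functions (binary cross-entropy for the discriminator and the non-saturating generator loss); the probability values fed in are illustrative, not from the HINGE data.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: push D(real description) -> 1 and D(generated) -> 0."""
    n = len(d_real)
    return -sum(math.log(r) + math.log(1.0 - f) for r, f in zip(d_real, d_fake)) / n

def generator_loss(d_fake):
    """Non-saturating generator loss: push D(generated description) -> 1."""
    return -sum(math.log(f) for f in d_fake) / len(d_fake)

# Illustrative discriminator outputs: confident-and-correct D gives low loss.
print(round(discriminator_loss([0.9], [0.1]), 3))  # → 0.211
```

During training these two losses are minimized alternately, so improvements in the generator's image descriptions directly raise the discriminator's loss, and vice versa.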