Representing Autonomous Systems’ Self-Confidence through Competency Boundaries

Published in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2015-09, Vol. 59 (1), pp. 279-283
Authors: Hutchins, Andrew R.; Cummings, M. L.; Draper, Mark; Hughes, Thomas
Format: Article
Language: English
Abstract: A method for determining the self-confidence of autonomous systems is proposed to assist operators in understanding the state of the unmanned vehicles under their control. A sensing-optimization/verification-action (SOVA) model, analogous to the perception-cognition-action human information processing model, has been developed to illustrate how autonomous systems interact with their environment and how areas of uncertainty affect system performance. LIDAR and GPS were examined for scenarios where sensed surroundings could be inaccurate, while discrete and probabilistic algorithms were surveyed for situations that could result in path planning uncertainty. Likert scales were developed to represent sensor and algorithm uncertainties, and these scales laid the foundation for the proposed Trust Annunciator Panel (TAP), consisting of a series of uncertainty level indicators (ULIs). The TAP emphasizes the critical role of human judgment and oversight, especially when autonomous systems operate in cluttered or dynamic environments.
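The abstract describes mapping sensor and algorithm uncertainties onto Likert scales that drive the TAP's uncertainty level indicators. As a rough illustrative sketch only (not the authors' implementation), a normalized uncertainty value could be binned into a 5-point Likert level; the function name, the 5-point scale, and the example uncertainty sources below are all assumptions:

```python
def likert_uncertainty_level(u: float, n_levels: int = 5) -> int:
    """Map a normalized uncertainty u in [0, 1] to a 1..n_levels Likert score.

    Level 1 = lowest uncertainty (highest self-confidence);
    level n_levels = highest uncertainty. Illustrative sketch only.
    """
    if not 0.0 <= u <= 1.0:
        raise ValueError("uncertainty must be normalized to [0, 1]")
    # Partition [0, 1] into n_levels equal bins; u == 1.0 falls in the top bin.
    return min(int(u * n_levels) + 1, n_levels)


# A hypothetical panel pairing each uncertainty source with its current level,
# loosely mirroring the sensor/algorithm sources named in the abstract:
tap = {
    "LIDAR": likert_uncertainty_level(0.12),         # -> 1 (low uncertainty)
    "GPS": likert_uncertainty_level(0.55),           # -> 3 (moderate)
    "path_planner": likert_uncertainty_level(0.95),  # -> 5 (high uncertainty)
}
```

How the real system aggregates and displays these levels is specified in the paper itself; this sketch only shows the general idea of discretizing a continuous uncertainty estimate onto a fixed indicator scale.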
ISSN:1541-9312
1071-1813
2169-5067
DOI:10.1177/1541931215591057