Trust Engineering for Human-AI Teams


Bibliographic Details
Published in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2019-11, Vol. 63 (1), p. 322-326
Authors: Ezer, Neta; Bruni, Sylvain; Cai, Yang; Hepenstal, Sam J.; Miller, Christopher A.; Schmorrow, Dylan D.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Human-AI teaming refers to systems in which humans and artificial intelligence (AI) agents collaborate to achieve significant mission performance improvements over what humans or AI can achieve alone. The goal is faster and more accurate decision-making by integrating the rapid data ingestion, learning, and analysis capabilities of AI with the creative problem-solving and abstraction capabilities of humans. The purpose of this panel is to discuss research directions in Trust Engineering for building appropriate bi-directional trust between humans and AI. Discussions focus on the challenges posed by systems that are increasingly complex and operate in imperfect information environments. Panelists provide their perspectives on addressing these challenges through concepts such as dynamic relationship management, adaptive systems, co-discovery learning, and algorithmic transparency. Mission scenarios in command and control (C2), piloting, cybersecurity, and criminal intelligence analysis demonstrate the importance of bi-directional trust in human-AI teams.
ISSN: 2169-5067, 1071-1813
DOI: 10.1177/1071181319631264