Towards Informing an Intuitive Mission Planning Interface for Autonomous Multi-Asset Teams via Image Descriptions

Establishing a basis for certification of autonomous systems using trust and trustworthiness is the focus of Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR). The Human-Machine Interface (HMI) team is working to capture and utilize the multitude of ways in which humans are already comfortable communicating mission goals, and to translate that into an intuitive mission planning interface. Several input/output modalities (speech/audio, typing/text, touch, and gesture) are being considered and investigated in the context of human-machine teaming for the ATTRACTOR design reference mission (DRM) of Search and Rescue or, more generally, intelligence, surveillance, and reconnaissance (ISR). The first of these investigations, the Human Informed Natural-language GANs Evaluation (HINGE) data collection effort, is aimed at building an image description database to train a Generative Adversarial Network (GAN). In addition to building the database, the HMI team was interested in whether, and how, modality (spoken vs. written) affects different aspects of the image descriptions given. The results will be analyzed to better inform the design of a mission planning interface.

Bibliographic Details
Main Authors: Le Vie, Lisa R.; Last, Mary Carolyn; Barrows, Bryan B.; Allen, B. Danette
Format: Conference Proceeding
Language: English
Subjects: Cybernetics, Artificial Intelligence and Robotics
Publisher: Langley Research Center
Date: 2018-06-25
Source: NASA Technical Reports Server
Rights: Public use permitted
Online Access: https://ntrs.nasa.gov/citations/20200002587