Towards Structured Evaluation of Deep Neural Network Supervisors
Deep Neural Networks (DNN) have improved the quality of several non-safety related products in the past years. However, before DNNs should be deployed to safety-critical applications, their robustness needs to be systematically analyzed. A common challenge for DNNs occurs when input is dissimilar to...
Saved in:
Main authors: | Henriksson, Jens; Berger, Christian; Borg, Markus; Tornberg, Lars; Englund, Cristofer; Sathyamoorthy, Sankar Raman; Ursing, Stig |
---|---|
Format: | Article |
Published: | 2019-03-04 |
Language: | eng |
Subjects: | Computer Science - Learning; Computer Science - Software Engineering |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Henriksson, Jens; Berger, Christian; Borg, Markus; Tornberg, Lars; Englund, Cristofer; Sathyamoorthy, Sankar Raman; Ursing, Stig |
description | Deep Neural Networks (DNN) have improved the quality of several non-safety
related products in the past years. However, before DNNs should be deployed to
safety-critical applications, their robustness needs to be systematically
analyzed. A common challenge for DNNs occurs when input is dissimilar to the
training set, which might lead to high confidence predictions despite proper
knowledge of the input. Several previous studies have proposed to complement
DNNs with a supervisor that detects when inputs are outside the scope of the
network. Most of these supervisors, however, are developed and tested for a
selected scenario using a specific performance metric. In this work, we
emphasize the need to assess and compare the performance of supervisors in a
structured way. We present a framework constituted by four datasets organized
in six test cases combined with seven evaluation metrics. The test cases
provide varying complexity and include data from publicly available sources as
well as a novel dataset consisting of images from simulated driving scenarios.
The latter we plan to make publicly available. Our framework can be used to
support DNN supervisor evaluation, which in turn could be used to motivate
development, validation, and deployment of DNNs in safety-critical
applications. |
doi_str_mv | 10.48550/arxiv.1903.01263 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1903.01263 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_1903_01263 |
source | arXiv.org |
subjects | Computer Science - Learning; Computer Science - Software Engineering |
title | Towards Structured Evaluation of Deep Neural Network Supervisors |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T01%3A29%3A51IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Towards%20Structured%20Evaluation%20of%20Deep%20Neural%20Network%20Supervisors&rft.au=Henriksson,%20Jens&rft.date=2019-03-04&rft_id=info:doi/10.48550/arxiv.1903.01263&rft_dat=%3Carxiv_GOX%3E1903_01263%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |