Is My Driver Observation Model Overconfident? Input-Guided Calibration Networks for Reliable and Interpretable Confidence Estimates


Bibliographic Details

Published in: IEEE Transactions on Intelligent Transportation Systems, 2022-12, Vol. 23 (12), pp. 25271-25286
Main authors: Roitberg, Alina; Peng, Kunyu; Schneider, David; Yang, Kailun; Koulakis, Marios; Martinez, Manuel; Stiefelhagen, Rainer
Format: Article
Language: English
Description:

Driver observation models are rarely deployed under perfect conditions. In practice, illumination, camera placement and camera type differ from those present during training, and unforeseen behaviours may occur at any time. While observing the human behind the steering wheel leads to more intuitive human-vehicle interaction and safer driving, it requires recognition algorithms that not only predict the correct driver state but also convey their prediction quality through realistic and interpretable confidence measures. Reliable uncertainty estimates are crucial for building trust, and their absence is a serious obstacle to deploying activity recognition networks in real driving systems. In this work, we examine for the first time how well the confidence values of modern driver observation models match the probability of a correct outcome, and show that raw neural-network-based approaches tend to significantly overestimate their prediction quality. To correct this misalignment between the confidence values and the actual uncertainty, we consider two strategies. First, we enhance two activity recognition models often used for driver observation with temperature scaling, an off-the-shelf method for confidence calibration in image classification. Then, we introduce Calibrated Action Recognition with Input Guidance (CARING), a novel approach that leverages an additional neural network to learn to scale the confidences depending on the video representation. Extensive experiments on the Drive&Act dataset demonstrate that both strategies drastically improve the quality of model confidences; our CARING model outperforms both the original architectures and their temperature-scaling enhancement, yielding the best uncertainty estimates.
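To make the two calibration strategies in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: an expected-calibration-error (ECE) metric of the kind typically used to diagnose overconfidence, a global temperature scaler, and a CARING-style module that predicts a per-sample temperature from the input (video) representation. All class and function names here are hypothetical, and the details of the input-guided network (architecture, temperature parameterisation) are assumptions rather than values taken from the paper.

```python
# Hypothetical sketch of confidence calibration for a frozen classifier:
# (1) ECE to measure the confidence/accuracy gap,
# (2) global temperature scaling,
# (3) a CARING-style input-guided temperature network.
import torch
import torch.nn as nn
import torch.nn.functional as F


def expected_calibration_error(confidences, correct, n_bins=15):
    """Average |accuracy - mean confidence| over confidence bins,
    weighted by the fraction of samples in each bin."""
    bins = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = correct[in_bin].float().mean() - confidences[in_bin].mean()
            ece += gap.abs() * in_bin.float().mean()
    return ece.item()


class TemperatureScaler(nn.Module):
    """Global temperature scaling: a single scalar T fit on validation
    logits; dividing by T > 0 never changes the predicted class."""
    def __init__(self):
        super().__init__()
        self.log_t = nn.Parameter(torch.zeros(1))  # T = exp(log_t) > 0

    def forward(self, logits):
        return logits / self.log_t.exp()


class InputGuidedScaler(nn.Module):
    """CARING-style scaler (assumed architecture): a small network maps
    the sample's feature representation to a per-sample temperature."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, logits, features):
        # softplus + 1 keeps T >= 1, so confidences can only be softened
        t = F.softplus(self.net(features)) + 1.0
        return logits / t


def fit_scaler(scaler, logits, labels, features=None, steps=200, lr=1e-2):
    """Fit calibration parameters by minimising NLL on held-out data;
    the underlying classifier stays frozen, so accuracy is unchanged."""
    opt = torch.optim.Adam(scaler.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = scaler(logits) if features is None else scaler(logits, features)
        F.cross_entropy(out, labels).backward()
        opt.step()
    return scaler


# Usage sketch (validation/test logits precomputed, names hypothetical):
#   scaler = fit_scaler(TemperatureScaler(), val_logits, val_labels)
#   conf, pred = scaler(test_logits).softmax(-1).max(-1)
#   print(expected_calibration_error(conf, pred == test_labels))
```

Because every temperature is a positive scalar, both scalers leave the argmax (and hence classification accuracy) untouched; the input-guided variant differs only in that hard, unfamiliar inputs can receive a larger temperature, and thus lower confidence, than easy ones.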
DOI: 10.1109/TITS.2022.3196410
Publisher: IEEE, New York
CODEN: ITISFG
Rights: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
ISSN: 1524-9050
EISSN: 1558-0016
Source: IEEE Electronic Library (IEL)
Subjects:
Activity recognition
Algorithms
Calibration
Driver activity recognition
Estimates
Image classification
Misalignment
model confidence reliability
Neural networks
Object recognition
Predictive models
Reliability
Scaling
Steering wheels
Uncertainty
uncertainty in deep learning
Vehicles