Understanding the Impact of Edge Cases from Occluded Pedestrians for ML Systems

Machine learning (ML)-enabled approaches are considered a substantial supporting technique for the detection and classification of obstacles and traffic participants for self-driving vehicles. Major breakthroughs have been demonstrated in the past few years, even covering the complete end-to-end data processing chain from sensory inputs through perception and planning to vehicle control of acceleration, braking, and steering. YOLO (you-only-look-once) is a state-of-the-art perception neural network (NN) architecture that provides object detection and classification through bounding-box estimation on camera images. Since such NNs are trained on well-annotated images, in this paper we study how the NN's confidence levels vary when it is tested on hand-crafted occlusions added to a test set. We compare regular (full-body) pedestrian detection to upper-body and lower-body detection. Our findings show that the two NNs using only partial information perform similarly well to the full-body NN whenever the full-body NN's performance is 0.75 or better. Furthermore, and as expected, the network trained only on the lower half of the body is least prone to disturbances from occlusions of the upper half, and vice versa.

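As a concrete illustration of the evaluation the abstract describes (not code from the paper), the sketch below masks the upper or lower half of an annotated pedestrian bounding box before the image is passed to a detector, so the detector's confidence on the occluded variants can be compared with its confidence on the original image. The grey-fill occlusion and the `run_detector` callable are assumptions for illustration; the latter stands in for whatever YOLO inference wrapper is used and should return the top pedestrian confidence for an image.

```python
import numpy as np


def occlude_half(image: np.ndarray, box: tuple, half: str = "upper",
                 fill: int = 128) -> np.ndarray:
    """Return a copy of `image` with the upper or lower half of the pedestrian
    bounding box (x1, y1, x2, y2) replaced by a flat grey patch, mimicking a
    hand-crafted occlusion added to a test image."""
    x1, y1, x2, y2 = box
    occluded = image.copy()
    y_mid = (y1 + y2) // 2
    if half == "upper":
        occluded[y1:y_mid, x1:x2] = fill   # hide head and torso
    else:
        occluded[y_mid:y2, x1:x2] = fill   # hide legs
    return occluded


def confidence_comparison(image: np.ndarray, box: tuple, run_detector) -> dict:
    """Compare detector confidence on the original image with the two occluded
    variants. `run_detector` is a hypothetical stand-in for a YOLO inference
    call that returns the highest pedestrian-class confidence (0.0 if none)."""
    return {
        "baseline": run_detector(image),
        "upper_occluded": run_detector(occlude_half(image, box, "upper")),
        "lower_occluded": run_detector(occlude_half(image, box, "lower")),
    }
```

Aggregating such per-image comparisons over a test set yields the kind of sensitivity analysis reported in the abstract, for example that a lower-body network should be largely unaffected by occlusions of the upper half.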

Bibliographic details
Published in: arXiv.org, 2022-04
Main authors: Henriksson, Jens; Berger, Christian; Ursing, Stig
Format: Article
Language: English
Subjects: Autonomous cars; Confidence intervals; Data processing; Image classification; Machine learning; Neural networks; Object recognition; Occlusion; Pedestrians; Perception; Steering
EISSN: 2331-8422
Online access: Full text