Discriminative out-of-distribution detection for semantic segmentation


Bibliographic details

Main authors: Bevandić, Petra; Krešo, Ivan; Oršić, Marin; Šegvić, Siniša
Format: Article
Language: English
Online access: order full text
Description: Most classification and segmentation datasets assume a closed-world scenario in which predictions are expressed as a distribution over a predetermined set of visual classes. However, such an assumption implies unavoidable and often unnoticed failures in the presence of out-of-distribution (OOD) input. These failures are bound to happen in most real-life applications, since current visual ontologies are far from comprehensive. We propose to address this issue by discriminative detection of OOD pixels in the input data. Unlike recent approaches, we avoid making any decisions by observing only the training dataset of the primary model trained to solve the desired computer vision task. Instead, we train a dedicated OOD model which discriminates the primary training set from a much larger "background" dataset that approximates the variety of the visual world. We perform our experiments on high-resolution natural images in a dense prediction setup. We use several road-driving datasets as our training distribution, while we approximate the background distribution with the ILSVRC dataset. We evaluate our approach on the WildDash test set, which is currently the only public test dataset that includes out-of-distribution images. The obtained results show that the proposed approach succeeds in identifying out-of-distribution pixels while outperforming previous work by a wide margin.
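The core idea of the described approach, training a dedicated discriminator that separates the primary training distribution from a large "background" dataset and then using it to flag OOD pixels, can be sketched in miniature. The synthetic Gaussian features, dataset sizes, and the plain logistic-regression classifier below are illustrative assumptions; the paper itself trains a dense segmentation-style network on real images.

```python
import math
import random

random.seed(0)

D = 4        # dimensionality of a hypothetical per-pixel feature
N = 300      # samples per dataset (illustrative only)

# Stand-ins for features of the primary (road-driving) training set and a
# "background" set approximating the variety of the visual world.
primary = [[random.gauss(0.0, 1.0) for _ in range(D)] for _ in range(N)]
background = [[random.gauss(2.5, 1.0) for _ in range(D)] for _ in range(N)]

X = primary + background
y = [0.0] * N + [1.0] * N        # label 1 = background / OOD

def sigmoid(z):
    z = max(min(z, 30.0), -30.0)  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

# Train a logistic-regression discriminator with batch gradient descent.
w, b, lr = [0.0] * D, 0.0, 0.5
for _ in range(300):
    gw, gb = [0.0] * D, 0.0
    for xi, yi in zip(X, y):
        err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
        for j in range(D):
            gw[j] += err * xi[j]
        gb += err
    w = [wj - lr * gj / len(y) for wj, gj in zip(w, gw)]
    b -= lr * gb / len(y)

def ood_score(xi):
    """Estimated P(background | feature); high values flag OOD pixels."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

accuracy = sum((ood_score(xi) > 0.5) == (yi == 1.0)
               for xi, yi in zip(X, y)) / len(y)
```

In the paper the discriminator is dense (one decision per pixel) and the background set is ILSVRC; here the same train-to-discriminate step is shown on toy feature vectors.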
DOI: 10.48550/arxiv.1808.07703
Date: 2018-08-23
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition
URL: https://arxiv.org/abs/1808.07703