Optical Flow for Autonomous Driving: Applications, Challenges and Improvements

Optical flow estimation is a well-studied topic for automated driving applications. Many outstanding optical flow estimation methods have been proposed, but they become erroneous when tested in challenging scenarios that are commonly encountered. Despite the increasing use of fisheye cameras for near-field sensing in automated driving, there is very limited literature on optical flow estimation with strong lens distortion. Thus we propose and evaluate training strategies to improve a learning-based optical flow algorithm by leveraging the only existing fisheye dataset with optical flow ground truth. While trained with synthetic data, the model demonstrates strong capabilities to generalize to real world fisheye data. The other challenge neglected by existing state-of-the-art algorithms is low light. We propose a novel, generic semi-supervised framework that significantly boosts the performance of existing methods in such conditions. To the best of our knowledge, this is the first approach that explicitly handles optical flow estimation in low light.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Shen, Shihao, Kerofsky, Louis, Yogamani, Senthil
Format: Article
Language: eng
Subjects:
Online Access: Request full text
creator Shen, Shihao
Kerofsky, Louis
Yogamani, Senthil
description Optical flow estimation is a well-studied topic for automated driving applications. Many outstanding optical flow estimation methods have been proposed, but they become erroneous when tested in challenging scenarios that are commonly encountered. Despite the increasing use of fisheye cameras for near-field sensing in automated driving, there is very limited literature on optical flow estimation with strong lens distortion. Thus we propose and evaluate training strategies to improve a learning-based optical flow algorithm by leveraging the only existing fisheye dataset with optical flow ground truth. While trained with synthetic data, the model demonstrates strong capabilities to generalize to real world fisheye data. The other challenge neglected by existing state-of-the-art algorithms is low light. We propose a novel, generic semi-supervised framework that significantly boosts the performance of existing methods in such conditions. To the best of our knowledge, this is the first approach that explicitly handles optical flow estimation in low light.
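For readers unfamiliar with the task the abstract describes: optical flow assigns each pixel a 2-D motion vector between consecutive frames. The paper's method is learning-based, but the classical baseline is the Lucas-Kanade least-squares step, sketched below in plain NumPy. This is an illustrative sketch only (the function name and the synthetic test image are this record's own assumptions, not the authors' code), estimating a single translation for a whole patch.

```python
import numpy as np

def lucas_kanade_patch(I0, I1):
    """Single-translation Lucas-Kanade: solve the least-squares
    system [Ix Iy] v = -It for one flow vector over a patch."""
    Ix = np.gradient(I0, axis=1)   # horizontal spatial gradient
    Iy = np.gradient(I0, axis=0)   # vertical spatial gradient
    It = I1 - I0                   # temporal difference
    # Drop the border, where one-sided finite differences are less accurate.
    s = (slice(1, -1), slice(1, -1))
    A = np.stack([Ix[s].ravel(), Iy[s].ravel()], axis=1)
    b = -It[s].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # estimated (dx, dy)

# Synthetic check: a quadratic intensity surface shifted by (0.2, 0.1).
x, y = np.meshgrid(np.arange(16.0), np.arange(16.0))
I0 = x**2 + y**2
I1 = (x - 0.2)**2 + (y - 0.1)**2
v = lucas_kanade_patch(I0, I1)  # recovers a vector close to [0.2, 0.1]
```

The paper's contributions target exactly the regimes where this linearized, brightness-constancy model breaks down: strong fisheye lens distortion and low-light noise.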
doi_str_mv 10.48550/arxiv.2301.04422
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2301.04422
language eng
recordid cdi_arxiv_primary_2301_04422
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Robotics
title Optical Flow for Autonomous Driving: Applications, Challenges and Improvements