Robust Lane Detection via Expanded Self Attention
Image-based lane detection is one of the key technologies in autonomous vehicles. Modern deep learning methods achieve high performance in lane detection, but it is still difficult to accurately detect lanes in challenging situations such as congested roads and extreme lighting conditions.
Saved in:
Main authors: | Lee, Minhyeok; Lee, Junhyeop; Lee, Dogyoon; Kim, Woojin; Hwang, Sangwon; Lee, Sangyoun |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
creator | Lee, Minhyeok; Lee, Junhyeop; Lee, Dogyoon; Kim, Woojin; Hwang, Sangwon; Lee, Sangyoun |
description | Image-based lane detection is one of the key technologies in
autonomous vehicles. Modern deep learning methods achieve high performance in
lane detection, but it is still difficult to accurately detect lanes in
challenging situations such as congested roads and extreme lighting conditions.
To be robust in these challenging situations, it is important to extract global
contextual information even from limited visual cues. In this paper, we propose
a simple but powerful self-attention mechanism optimized for lane detection,
called the Expanded Self Attention (ESA) module. Inspired by the simple
geometric structure of lanes, the proposed method predicts the confidence of a
lane along the vertical and horizontal directions in an image. Predicting this
confidence enables the model to estimate occluded lane locations by extracting
global contextual information. The ESA module can be easily implemented and
applied to any encoder-decoder-based model without increasing the inference
time. The performance of our method is evaluated on three popular lane
detection benchmarks (TuSimple, CULane, and BDD100K). We achieve
state-of-the-art performance on CULane and BDD100K and a distinct improvement
on the TuSimple dataset. The experimental results show that our approach is
robust to occlusion and extreme lighting conditions. |
doi_str_mv | 10.48550/arxiv.2102.07037 |
format | Article |
identifier | DOI: 10.48550/arxiv.2102.07037 |
language | eng |
recordid | cdi_arxiv_primary_2102_07037 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | Robust Lane Detection via Expanded Self Attention |
url | https://arxiv.org/abs/2102.07037 |
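
The abstract above describes the core mechanism in enough detail to sketch: lane confidence is predicted separately along the vertical and horizontal image axes and used to reweight encoder features. The following PyTorch sketch illustrates only that row/column-confidence idea; the class name `ESABlock`, the 1x1-convolution heads, the mean pooling, and all shapes are hypothetical assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of row/column lane-confidence attention, as suggested
# by the abstract. Not the authors' code; names and pooling are assumptions.
import torch
import torch.nn as nn

class ESABlock(nn.Module):
    """Reweights an encoder feature map with per-row and per-column
    lane-confidence vectors, expanded along the opposite axis."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs collapse the channel dimension to a single confidence map.
        self.row_head = nn.Conv2d(channels, 1, kernel_size=1)
        self.col_head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) encoder features.
        # Per-row confidence: pool over width -> (B, 1, H, 1).
        row_conf = torch.sigmoid(self.row_head(feat).mean(dim=3, keepdim=True))
        # Per-column confidence: pool over height -> (B, 1, 1, W).
        col_conf = torch.sigmoid(self.col_head(feat).mean(dim=2, keepdim=True))
        # Broadcasting expands the two 1-D confidences into a 2-D attention
        # map; output shape matches the input, so the block is a drop-in
        # addition to an encoder-decoder model.
        return feat * row_conf * col_conf

if __name__ == "__main__":
    block = ESABlock(channels=64)
    x = torch.randn(2, 64, 36, 100)  # arbitrary feature-map size
    assert block(x).shape == x.shape
```

Note that the abstract states the module adds no inference-time cost, which suggests it may act only as a training-time signal in the paper; this sketch simply runs the attention in the forward pass to make the row/column expansion concrete.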