BEVSeg2TP: Surround View Camera Bird's-Eye-View Based Joint Vehicle Segmentation and Ego Vehicle Trajectory Prediction
creator | Sharma, Sushil; Das, Arindam; Sistu, Ganesh; Halton, Mark; Eising, Ciarán |
description | Trajectory prediction is a key task for vehicle autonomy. While the number of traffic rules is limited, the combinations and uncertainties associated with each agent's behaviour in real-world scenarios are nearly impossible to encode, so there is growing interest in learning-based trajectory prediction. The method proposed in this paper treats perception and trajectory prediction as a unified system, and we show that doing so has the potential to improve perception performance. To this end, we present BEVSeg2TP, a surround-view camera, bird's-eye-view-based joint vehicle segmentation and ego-vehicle trajectory prediction system for autonomous vehicles. The system uses a network trained on multiple camera views. The images are transformed using several deep learning techniques to perform semantic segmentation of objects, including other vehicles, in the scene. The segmentation outputs are fused across the camera views to obtain a comprehensive bird's-eye-view representation of the surrounding vehicles. The system then predicts the future trajectory of the ego vehicle using a spatiotemporal probabilistic network (STPN), which leverages information from encoder-decoder transformers and the joint vehicle segmentation to optimize trajectory prediction. |
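The segmentation side of the pipeline described above, per-camera encoding followed by fusion of the views into a single bird's-eye-view grid and a vehicle segmentation head, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module names, dimensions, and the pooling-based view transform are assumptions standing in for the geometric camera-to-BEV projection the abstract refers to.

```python
# Hypothetical sketch: surround-view images -> per-camera features -> shared
# BEV grid -> fused vehicle segmentation. AdaptiveAvgPool2d is a stand-in for
# a real geometric camera-to-BEV view transform.
import torch
import torch.nn as nn


class CameraEncoder(nn.Module):
    """Small per-camera image encoder (illustrative, not the paper's backbone)."""
    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class BEVSegmentationNet(nn.Module):
    """Encodes N camera views, maps each to a BEV grid, averages the grids
    across cameras, and predicts per-cell vehicle segmentation logits."""
    def __init__(self, feat_channels: int = 64, bev_size: int = 100, num_classes: int = 2):
        super().__init__()
        self.bev_size = bev_size
        self.feat_channels = feat_channels
        self.encoder = CameraEncoder(feat_channels)
        self.to_bev = nn.AdaptiveAvgPool2d((bev_size, bev_size))  # placeholder view transform
        self.seg_head = nn.Conv2d(feat_channels, num_classes, kernel_size=1)

    def forward(self, images):                      # images: (B, N, 3, H, W)
        b, n = images.shape[:2]
        feats = self.encoder(images.flatten(0, 1))  # (B*N, C, h', w')
        bev = self.to_bev(feats).view(b, n, self.feat_channels, self.bev_size, self.bev_size)
        fused = bev.mean(dim=1)                     # fuse the per-camera BEV maps
        return self.seg_head(fused)                 # (B, num_classes, bev, bev)


if __name__ == "__main__":
    surround = torch.randn(1, 6, 3, 224, 224)       # one sample, six cameras
    print(BEVSegmentationNet()(surround).shape)     # torch.Size([1, 2, 100, 100])
```

Averaging the per-camera BEV maps is just one simple fusion choice; the paper's actual fusion and transformer components may differ.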
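The spatiotemporal probabilistic network (STPN) for ego-vehicle trajectory prediction is not specified in this record, so the following is only a generic probabilistic trajectory head under assumed shapes: a recurrent layer over a sequence of fused BEV features regresses Gaussian means and variances for future waypoints. All names, the GRU choice, and the Gaussian negative log-likelihood loss are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of a spatiotemporal probabilistic trajectory head: pool each
# BEV frame, run a GRU over time, and predict a per-waypoint Gaussian.
import torch
import torch.nn as nn


class TrajectoryHead(nn.Module):
    """Consumes a sequence of fused BEV features and regresses a distribution
    (mean and log-variance) over future ego waypoints (x, y)."""
    def __init__(self, bev_channels: int = 64, hidden: int = 128, horizon: int = 12):
        super().__init__()
        self.horizon = horizon
        self.pool = nn.AdaptiveAvgPool2d(1)            # collapse each BEV map to a vector
        self.temporal = nn.GRU(bev_channels, hidden, batch_first=True)
        self.mean = nn.Linear(hidden, horizon * 2)     # future (x, y) means
        self.log_var = nn.Linear(hidden, horizon * 2)  # per-coordinate log-variance

    def forward(self, bev_seq):                        # (B, T, C, H, W)
        b, t, c, h, w = bev_seq.shape
        tokens = self.pool(bev_seq.flatten(0, 1)).view(b, t, c)
        _, last = self.temporal(tokens)                # final hidden state: (1, B, hidden)
        last = last.squeeze(0)
        mu = self.mean(last).view(b, self.horizon, 2)
        log_var = self.log_var(last).view(b, self.horizon, 2)
        return mu, log_var


def gaussian_nll(mu, log_var, target):
    """Negative log-likelihood of the ground-truth trajectory under the
    predicted per-waypoint Gaussian, a common probabilistic training loss."""
    return (0.5 * (log_var + (target - mu) ** 2 / log_var.exp())).mean()


if __name__ == "__main__":
    bev_seq = torch.randn(1, 4, 64, 100, 100)          # four past fused BEV frames
    mu, log_var = TrajectoryHead()(bev_seq)
    loss = gaussian_nll(mu, log_var, torch.zeros_like(mu))
    print(mu.shape, loss.item())                       # torch.Size([1, 12, 2])
```

A single-Gaussian output is the simplest probabilistic formulation; multi-modal mixtures or transformer decoders would slot into the same interface.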
doi_str_mv | 10.48550/arxiv.2312.13081 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2312.13081 |
language | eng |
recordid | cdi_arxiv_primary_2312_13081 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | BEVSeg2TP: Surround View Camera Bird's-Eye-View Based Joint Vehicle Segmentation and Ego Vehicle Trajectory Prediction |
url | https://arxiv.org/abs/2312.13081 |