Geo-Context Aware Study of Vision-Based Autonomous Driving Models and Spatial Video Data
Vision-based deep learning (DL) methods have made great progress in learning autonomous driving models from large-scale crowd-sourced video datasets. They are trained to predict instantaneous driving behaviors from video data captured by on-vehicle cameras. In this paper, we develop a geo-context aware visualization system for the study of Autonomous Driving Model (ADM) predictions together with large-scale ADM video data. The visual study is seamlessly integrated with the geographical environment by combining DL model performance with geospatial visualization techniques. Model performance measures can be studied together with a set of geospatial attributes over map views. Users can also discover and compare prediction behaviors of multiple DL models in both city-wide and street-level analysis, together with road images and video contents. Therefore, the system provides a new visual exploration platform for DL model designers in autonomous driving. Use cases and domain expert evaluation show the utility and effectiveness of the visualization system.
Saved in:
Published in: | IEEE transactions on visualization and computer graphics 2022-01, Vol.28 (1), p.1019-1029 |
---|---|
Main authors: | Jamonnak, Suphanut; Zhao, Ye; Huang, Xinyi; Amiruzzaman, Md |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
container_end_page | 1029 |
---|---|
container_issue | 1 |
container_start_page | 1019 |
container_title | IEEE transactions on visualization and computer graphics |
container_volume | 28 |
creator | Jamonnak, Suphanut; Zhao, Ye; Huang, Xinyi; Amiruzzaman, Md |
description | Vision-based deep learning (DL) methods have made great progress in learning autonomous driving models from large-scale crowd-sourced video datasets. They are trained to predict instantaneous driving behaviors from video data captured by on-vehicle cameras. In this paper, we develop a geo-context aware visualization system for the study of Autonomous Driving Model (ADM) predictions together with large-scale ADM video data. The visual study is seamlessly integrated with the geographical environment by combining DL model performance with geospatial visualization techniques. Model performance measures can be studied together with a set of geospatial attributes over map views. Users can also discover and compare prediction behaviors of multiple DL models in both city-wide and street-level analysis, together with road images and video contents. Therefore, the system provides a new visual exploration platform for DL model designers in autonomous driving. Use cases and domain expert evaluation show the utility and effectiveness of the visualization system. |
doi_str_mv | 10.1109/TVCG.2021.3114853 |
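The `doi_str_mv` field holds the article's DOI, which can be turned into a persistent link through the public doi.org resolver. A minimal sketch (the resolver prefix is the standard DOI scheme; the variable names are illustrative):

```python
# Build a resolvable link from the record's DOI using the
# standard https://doi.org/ resolver prefix.
DOI = "10.1109/TVCG.2021.3114853"  # value of the doi_str_mv field
url = f"https://doi.org/{DOI}"
print(url)  # https://doi.org/10.1109/TVCG.2021.3114853
```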
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1077-2626 |
ispartof | IEEE transactions on visualization and computer graphics, 2022-01, Vol.28 (1), p.1019-1029 |
issn | 1077-2626; 1941-0506 |
language | eng |
recordid | cdi_proquest_journals_2613369280 |
source | IEEE Electronic Library (IEL) |
subjects | Analytical models; Autonomous Driving; Autonomous vehicles; Computational modeling; Context; Data models; Data visualization; Deep learning; Predictive models; Spatial data; Spatial Video; Video data; Vision; Vision-based Deep Learning Models; Visualization; Visualization System |
title | Geo-Context Aware Study of Vision-Based Autonomous Driving Models and Spatial Video Data |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-23T03%3A37%3A40IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Geo-Context%20Aware%20Study%20of%20Vision-Based%20Autonomous%20Driving%20Models%20and%20Spatial%20Video%20Data&rft.jtitle=IEEE%20transactions%20on%20visualization%20and%20computer%20graphics&rft.au=Jamonnak,%20Suphanut&rft.date=2022-01&rft.volume=28&rft.issue=1&rft.spage=1019&rft.epage=1029&rft.pages=1019-1029&rft.issn=1077-2626&rft.eissn=1941-0506&rft.coden=ITVGEA&rft_id=info:doi/10.1109/TVCG.2021.3114853&rft_dat=%3Cproquest_RIE%3E2578781281%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2613369280&rft_id=info:pmid/34596546&rft_ieee_id=9555830&rfr_iscdi=true |