Vision-Cloud Data Fusion for ADAS: A Lane Change Prediction Case Study

Bibliographic Details
Published in: IEEE Transactions on Intelligent Vehicles, June 2022, Vol. 7, No. 2, pp. 210-220
Authors: Liu, Yongkang; Wang, Ziran; Han, Kyungtae; Shou, Zhenyu; Tiwari, Prashant; Hansen, John H. L.
Format: Article
Language: English
DOI: 10.1109/TIV.2021.3103695
ISSN: 2379-8858
EISSN: 2379-8904
Publisher: IEEE, Piscataway

Abstract

With the rapid development of intelligent vehicles and Advanced Driver-Assistance Systems (ADAS), transportation systems will increasingly involve mixed levels of human driver engagement. Under these conditions, visual guidance for drivers is vitally important to prevent potential risks. To advance the development of visual guidance systems, we introduce a novel vision-cloud data fusion methodology that integrates camera images with Digital Twin information from the cloud to help intelligent vehicles make better decisions. Bounding boxes for target vehicles are drawn and matched by combining the output of an object detector running on the ego vehicle with position information received from the cloud. The best matching result, 79.2% accuracy at an intersection-over-union threshold of 0.7, is obtained when depth images serve as an additional feature source. A case study on lane change prediction demonstrates the effectiveness of the proposed data fusion methodology: a multi-layer perceptron is trained with modified lane change prediction approaches. Human-in-the-loop simulation results from the Unity game engine show that the proposed model significantly improves highway driving performance in terms of safety, comfort, and environmental sustainability.
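
The fusion step described in the abstract pairs bounding boxes from the on-board object detector with target positions reported by the cloud. The sketch below shows one plausible way to score and match boxes by intersection over union (IoU). It is illustrative only: the (x1, y1, x2, y2) box format and the greedy matching strategy are assumptions, not the paper's implementation; the abstract only states that matching is evaluated at a 0.7 IoU threshold.

```python
# Illustrative sketch (not the paper's code): matching camera-detected
# bounding boxes to cloud-reported targets via intersection over union (IoU).
# Boxes are assumed axis-aligned, given as (x1, y1, x2, y2) pixel tuples.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def match_detections(detected, projected, threshold=0.7):
    """Greedily pair detector boxes with cloud-projected boxes.

    A pairing counts as a match only when IoU exceeds the threshold;
    0.7 mirrors the evaluation criterion quoted in the abstract.
    """
    matches, used = [], set()
    for i, d in enumerate(detected):
        best_j, best_iou = None, threshold
        for j, p in enumerate(projected):
            if j in used:
                continue
            score = iou(d, p)
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j is not None:
            used.add(best_j)
            matches.append((i, best_j, best_iou))
    return matches
```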
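
For the lane change case study, the abstract names a multi-layer perceptron classifier. A minimal sketch follows, using scikit-learn's MLPClassifier on placeholder data; the feature layout (gaps and relative speeds around the target vehicle), the three maneuver classes, and the hidden-layer sizes are all hypothetical choices for illustration, not the paper's design.

```python
# Illustrative MLP lane change classifier; the feature layout and network
# sizes below are assumptions for demonstration, not taken from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical feature vector per time step for a target vehicle:
# [lateral offset, lateral speed, longitudinal gap to leader,
#  relative speed to leader, gap to left-lane follower, gap to right-lane follower]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))        # placeholder training data
y = rng.integers(0, 3, size=1000)     # 0 = keep lane, 1 = left, 2 = right

model = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                      max_iter=500, random_state=0)
model.fit(X, y)

# Predicted maneuver probabilities for one new observation.
probs = model.predict_proba(X[:1])
print(dict(zip(["keep", "left", "right"], probs[0].round(3))))
```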

Source: IEEE Electronic Library (IEL)

Subjects:
ADAS
Advanced driver assistance systems
Algorithms
Cameras
Case studies
Cloud computing
computer vision
data fusion
Data integration
Digital imaging
Digital twin
Guidance systems
Intelligent vehicles
lane change
Lane changing
Multilayer perceptrons
Multilayers
Object detection
Prediction algorithms
Transportation systems
Vehicles
Vision