Depth-based target segmentation for intelligent vehicles: fusion of radar and binocular stereo

Dynamic environment interpretation is of special interest for intelligent vehicle systems. It is expected to provide lane information, target depth, and the image positions of targets within given depth ranges. Typical segmentation algorithms cannot solve the problems satisfactorily, especially under the high-speed requirements of a real-time environment. Furthermore, the variation of image positions and sizes of targets creates difficulties for tracking. In this paper, we propose a sensor-fusion method that can make use of coarse target depth information to segment target locations in video images. Coarse depth ranges can be provided by radar systems or by a vision-based algorithm introduced in the paper. The new segmentation method offers more accuracy and robustness while decreasing the computational load.

Detailed Description

Bibliographic Details
Published in: IEEE transactions on intelligent transportation systems, 2002-09, Vol.3 (3), p.196-202
Main authors: Fang, Y., Masaki, I., Horn, B.
Format: Article
Language: English
Online access: Order full text
description Dynamic environment interpretation is of special interest for intelligent vehicle systems. It is expected to provide lane information, target depth, and the image positions of targets within given depth ranges. Typical segmentation algorithms cannot solve the problems satisfactorily, especially under the high-speed requirements of a real-time environment. Furthermore, the variation of image positions and sizes of targets creates difficulties for tracking. In this paper, we propose a sensor-fusion method that can make use of coarse target depth information to segment target locations in video images. Coarse depth ranges can be provided by radar systems or by a vision-based algorithm introduced in the paper. The new segmentation method offers more accuracy and robustness while decreasing the computational load.
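As a rough illustration of the idea described in the abstract — using a coarse depth range (e.g. reported by radar) to pick out the image region occupied by a target — one might mask a stereo disparity map by the depths that fall inside that range. This is only a sketch of the general technique, not the paper's algorithm; the function name and parameters are hypothetical.

```python
import numpy as np

def segment_by_depth_range(disparity, focal_px, baseline_m, z_min, z_max):
    """Return the bounding box (x0, y0, x1, y1) and boolean mask of pixels
    whose stereo depth lies inside the coarse range [z_min, z_max] metres.

    disparity : 2-D array of stereo disparities in pixels (0 = no match)
    focal_px  : focal length in pixels; baseline_m : stereo baseline in metres
    """
    # Depth from disparity: Z = f * B / d (pixels with no match get depth inf)
    depth = np.where(disparity > 0,
                     focal_px * baseline_m / np.maximum(disparity, 1e-6),
                     np.inf)
    mask = (depth >= z_min) & (depth <= z_max)
    if not mask.any():
        return None, mask  # no target in this depth range
    ys, xs = np.nonzero(mask)
    return (xs.min(), ys.min(), xs.max(), ys.max()), mask
```

Because the depth range is coarse, the mask only localizes the target roughly; a refinement step (e.g. edge-based snapping, as segmentation pipelines commonly add) would tighten the box.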
DOI: 10.1109/TITS.2002.802926
Publisher: New York: IEEE
CODEN: ITISFG
Rights: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2002
ISSN: 1524-9050
EISSN: 1558-0016
Source: IEEE Electronic Library (IEL)
Subjects:
Algorithms
Dynamical systems
Dynamics
Image segmentation
Intelligent sensors
Intelligent vehicles
Machine vision
Motion detection
Object detection
Radar
Radar detection
Radar imaging
Robustness
Segmentation
Spatial resolution