Research on multi-camera information fusion method for intelligent perception

In this paper, a Gaussian Mixture Model and the Mean Shift algorithm are used to detect and track moving objects in a visual perception network composed of multiple cameras. On this basis, a target matching method based on the wavelet transform is proposed for fusing visual information from the different cameras in the network.
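As a minimal illustration of the detection and tracking stage summarized above, the sketch below uses OpenCV's MOG2 background subtractor as the Gaussian Mixture Model and a hue-histogram Mean Shift tracker. It is a hedged sketch, not the authors' implementation; the video path, the minimum blob area, and all other parameter values are placeholders.

```python
import cv2

cap = cv2.VideoCapture("camera0.avi")        # placeholder: one camera's video stream
mog = cv2.createBackgroundSubtractorMOG2()   # GMM background model (OpenCV MOG2)

track_window = None   # (x, y, w, h) of the tracked target
roi_hist = None       # hue histogram of the target region
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog.apply(frame)               # foreground mask from the GMM

    if track_window is None:
        # Detection: take the largest moving blob as the target (area threshold is arbitrary).
        contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) > 500]
        if blobs:
            x, y, w, h = cv2.boundingRect(max(blobs, key=cv2.contourArea))
            track_window = (x, y, w, h)
            hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
            roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
            cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    else:
        # Tracking: Mean Shift on the hue back-projection of the target histogram.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_window = cv2.meanShift(back_proj, track_window, term_crit)

cap.release()
```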

Full description

Saved in:
Bibliographic Details
Published in: Multimedia tools and applications 2018-06, Vol.77 (12), p.15003-15026
Main Authors: Qi, Feng; Tianjiang, Wang; Fang, Liu; HeFei, Lin
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_end_page 15026
container_issue 12
container_start_page 15003
container_title Multimedia tools and applications
container_volume 77
creator Qi, Feng
Tianjiang, Wang
Fang, Liu
HeFei, Lin
description In this paper, a Gaussian Mixture Model and the Mean Shift algorithm are used to detect and track moving objects in a visual perception network composed of multiple cameras. On this basis, a target matching method based on the wavelet transform is proposed for fusing visual information from the different cameras in the network. The method takes local features as the basis of target matching: the wavelet transform detects feature points that represent important information in the target image, and the color of the neighborhood around each feature point is extracted as its salient feature. Classification and clustering are then applied by computing distances in the salient-feature vector space to measure the similarity of target features and thereby recognize the target. Test results show that the method can match and recognize moving objects through cooperation among multiple cameras.
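To make the matching stage concrete, the following hedged sketch detects feature points from wavelet detail coefficients, takes the mean color of a small neighborhood around each point as its salient feature, and compares targets from two cameras by the distance between their salient-feature sets. The Haar wavelet, the number of retained points, the neighborhood radius, and the nearest-neighbor distance rule are illustrative assumptions, not values from the paper.

```python
import numpy as np
import pywt

def feature_points(gray, keep=50):
    """Return (row, col) positions with the strongest wavelet detail response."""
    _, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
    energy = cH ** 2 + cV ** 2 + cD ** 2            # detail energy at half resolution
    idx = np.argsort(energy, axis=None)[-keep:]     # indices of the strongest responses
    rows, cols = np.unravel_index(idx, energy.shape)
    return np.stack([rows * 2, cols * 2], axis=1)   # map back to full-resolution coordinates

def salient_features(image, points, radius=4):
    """Mean color of the neighborhood around each feature point (image is H x W x 3)."""
    feats = []
    for r, c in points:
        patch = image[max(r - radius, 0):r + radius, max(c - radius, 0):c + radius]
        feats.append(patch.reshape(-1, 3).mean(axis=0))
    return np.array(feats)

def target_distance(feats_a, feats_b):
    """Average nearest-neighbor distance between two salient-feature sets."""
    d = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Usage: for a target cropped from each camera view, compute feature points on the
# grayscale crop, extract salient features from the color crop, and declare a match
# between the pair of targets with the smallest target_distance.
```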
doi_str_mv 10.1007/s11042-017-5085-z
format Article
fullrecord <record><control><sourceid>proquest_cross</sourceid><recordid>TN_cdi_proquest_journals_2059445366</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2059445366</sourcerecordid><originalsourceid>FETCH-LOGICAL-c316t-8a0f29fea7c5268303d2ae695d4b840d6e90ea0a9864fc6456c42b20356e229c3</originalsourceid><addsrcrecordid>eNp1kEtLxDAQx4MouK5-AG8Fz9HJs-1RFl-wIoieQzad7HbpY03ag_vpTangydMM838M_Ai5ZnDLAPK7yBhIToHlVEGh6PGELJjKBc1zzk7TLgqguQJ2Ti5i3AMwrbhckNd3jGiD22V9l7VjM9TU2RaDzerO96G1Q50EP8ZptDjs-ipL96QO2DT1FrshO2BweJiMl-TM2ybi1e9cks_Hh4_VM12_Pb2s7tfUCaYHWljwvPRoc6e4LgSIilvUparkppBQaSwBLdiy0NI7LZV2km84CKWR89KJJbmZew-h_xoxDmbfj6FLLw0HVUqphNbJxWaXC32MAb05hLq14dswMBM1M1MziZqZqJljyvA5E5O322L4a_4_9ANL_HBY</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2059445366</pqid></control><display><type>article</type><title>Research on multi-camera information fusion method for intelligent perception</title><source>SpringerLink (Online service)</source><creator>Qi, Feng ; Tianjiang, Wang ; Fang, Liu ; HeFei, Lin</creator><creatorcontrib>Qi, Feng ; Tianjiang, Wang ; Fang, Liu ; HeFei, Lin</creatorcontrib><description>In this paper, the Gaussian Mixture Model and Mean Shift algorithm are used to detect and track moving objects in the visual perception network composed of multiple cameras. And on this basis, a target matching method based on wavelet transform, which is applied in a visual perception network composed by multiple camera, fusing visual information from different cameras is proposed. This method takes local features as basis of target matching, and applies wavelet transformation to detect the feature points that represent important information of the target image, and then extracts the color of the neighborhood of feature points as its salient features. The method of classification and clustering is applied by calculating the distance of salient features vector space to measure similarities of the target features and thus realize target recognition. The test result shows that the method can realize the matching and recognition of moving object with the cooperation among multiple cameras.</description><identifier>ISSN: 1380-7501</identifier><identifier>EISSN: 1573-7721</identifier><identifier>DOI: 10.1007/s11042-017-5085-z</identifier><language>eng</language><publisher>New York: Springer US</publisher><subject>Cameras ; Clustering ; Computer Communication Networks ; Computer Science ; Data integration ; Data Structures and Information Theory ; Feature extraction ; Feature recognition ; Image detection ; Matching ; Moving object recognition ; Multimedia Information Systems ; Multisensor fusion ; Special Purpose and Application-Based Systems ; Target recognition ; Visual perception ; Visual perception driven algorithms ; Wavelet transforms</subject><ispartof>Multimedia tools and applications, 2018-06, Vol.77 (12), p.15003-15026</ispartof><rights>Springer Science+Business Media, LLC 2017</rights><rights>Multimedia Tools and Applications is a copyright of Springer, (2017). 
All Rights Reserved.</rights><lds50>peer_reviewed</lds50><woscitedreferencessubscribed>false</woscitedreferencessubscribed><citedby>FETCH-LOGICAL-c316t-8a0f29fea7c5268303d2ae695d4b840d6e90ea0a9864fc6456c42b20356e229c3</citedby><cites>FETCH-LOGICAL-c316t-8a0f29fea7c5268303d2ae695d4b840d6e90ea0a9864fc6456c42b20356e229c3</cites></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktopdf>$$Uhttps://link.springer.com/content/pdf/10.1007/s11042-017-5085-z$$EPDF$$P50$$Gspringer$$H</linktopdf><linktohtml>$$Uhttps://link.springer.com/10.1007/s11042-017-5085-z$$EHTML$$P50$$Gspringer$$H</linktohtml><link.rule.ids>314,776,780,27901,27902,41464,42533,51294</link.rule.ids></links><search><creatorcontrib>Qi, Feng</creatorcontrib><creatorcontrib>Tianjiang, Wang</creatorcontrib><creatorcontrib>Fang, Liu</creatorcontrib><creatorcontrib>HeFei, Lin</creatorcontrib><title>Research on multi-camera information fusion method for intelligent perception</title><title>Multimedia tools and applications</title><addtitle>Multimed Tools Appl</addtitle><description>In this paper, the Gaussian Mixture Model and Mean Shift algorithm are used to detect and track moving objects in the visual perception network composed of multiple cameras. And on this basis, a target matching method based on wavelet transform, which is applied in a visual perception network composed by multiple camera, fusing visual information from different cameras is proposed. This method takes local features as basis of target matching, and applies wavelet transformation to detect the feature points that represent important information of the target image, and then extracts the color of the neighborhood of feature points as its salient features. The method of classification and clustering is applied by calculating the distance of salient features vector space to measure similarities of the target features and thus realize target recognition. 
The test result shows that the method can realize the matching and recognition of moving object with the cooperation among multiple cameras.</description><subject>Cameras</subject><subject>Clustering</subject><subject>Computer Communication Networks</subject><subject>Computer Science</subject><subject>Data integration</subject><subject>Data Structures and Information Theory</subject><subject>Feature extraction</subject><subject>Feature recognition</subject><subject>Image detection</subject><subject>Matching</subject><subject>Moving object recognition</subject><subject>Multimedia Information Systems</subject><subject>Multisensor fusion</subject><subject>Special Purpose and Application-Based Systems</subject><subject>Target recognition</subject><subject>Visual perception</subject><subject>Visual perception driven algorithms</subject><subject>Wavelet transforms</subject><issn>1380-7501</issn><issn>1573-7721</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2018</creationdate><recordtype>article</recordtype><sourceid>8G5</sourceid><sourceid>BENPR</sourceid><sourceid>GUQSH</sourceid><sourceid>M2O</sourceid><recordid>eNp1kEtLxDAQx4MouK5-AG8Fz9HJs-1RFl-wIoieQzad7HbpY03ag_vpTangydMM838M_Ai5ZnDLAPK7yBhIToHlVEGh6PGELJjKBc1zzk7TLgqguQJ2Ti5i3AMwrbhckNd3jGiD22V9l7VjM9TU2RaDzerO96G1Q50EP8ZptDjs-ipL96QO2DT1FrshO2BweJiMl-TM2ybi1e9cks_Hh4_VM12_Pb2s7tfUCaYHWljwvPRoc6e4LgSIilvUparkppBQaSwBLdiy0NI7LZV2km84CKWR89KJJbmZew-h_xoxDmbfj6FLLw0HVUqphNbJxWaXC32MAb05hLq14dswMBM1M1MziZqZqJljyvA5E5O322L4a_4_9ANL_HBY</recordid><startdate>20180601</startdate><enddate>20180601</enddate><creator>Qi, Feng</creator><creator>Tianjiang, Wang</creator><creator>Fang, Liu</creator><creator>HeFei, Lin</creator><general>Springer US</general><general>Springer Nature B.V</general><scope>AAYXX</scope><scope>CITATION</scope><scope>3V.</scope><scope>7SC</scope><scope>7WY</scope><scope>7WZ</scope><scope>7XB</scope><scope>87Z</scope><scope>8AL</scope><scope>8AO</scope><scope>8FD</scope><scope>8FE</scope><scope>8FG</scope><scope>8FK</scope><scope>8FL</scope><scope>8G5</scope><scope>ABUWG</scope><scope>AFKRA</scope><scope>ARAPS</scope><scope>AZQEC</scope><scope>BENPR</scope><scope>BEZIV</scope><scope>BGLVJ</scope><scope>CCPQU</scope><scope>DWQXO</scope><scope>FRNLG</scope><scope>F~G</scope><scope>GNUQQ</scope><scope>GUQSH</scope><scope>HCIFZ</scope><scope>JQ2</scope><scope>K60</scope><scope>K6~</scope><scope>K7-</scope><scope>L.-</scope><scope>L7M</scope><scope>L~C</scope><scope>L~D</scope><scope>M0C</scope><scope>M0N</scope><scope>M2O</scope><scope>MBDVC</scope><scope>P5Z</scope><scope>P62</scope><scope>PQBIZ</scope><scope>PQBZA</scope><scope>PQEST</scope><scope>PQQKQ</scope><scope>PQUKI</scope><scope>Q9U</scope></search><sort><creationdate>20180601</creationdate><title>Research on multi-camera information fusion method for intelligent perception</title><author>Qi, Feng ; Tianjiang, Wang ; Fang, Liu ; HeFei, Lin</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c316t-8a0f29fea7c5268303d2ae695d4b840d6e90ea0a9864fc6456c42b20356e229c3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2018</creationdate><topic>Cameras</topic><topic>Clustering</topic><topic>Computer Communication Networks</topic><topic>Computer Science</topic><topic>Data integration</topic><topic>Data Structures and Information Theory</topic><topic>Feature extraction</topic><topic>Feature recognition</topic><topic>Image 
detection</topic><topic>Matching</topic><topic>Moving object recognition</topic><topic>Multimedia Information Systems</topic><topic>Multisensor fusion</topic><topic>Special Purpose and Application-Based Systems</topic><topic>Target recognition</topic><topic>Visual perception</topic><topic>Visual perception driven algorithms</topic><topic>Wavelet transforms</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Qi, Feng</creatorcontrib><creatorcontrib>Tianjiang, Wang</creatorcontrib><creatorcontrib>Fang, Liu</creatorcontrib><creatorcontrib>HeFei, Lin</creatorcontrib><collection>CrossRef</collection><collection>ProQuest Central (Corporate)</collection><collection>Computer and Information Systems Abstracts</collection><collection>ABI/INFORM Collection</collection><collection>ABI/INFORM Global (PDF only)</collection><collection>ProQuest Central (purchase pre-March 2016)</collection><collection>ABI/INFORM Collection</collection><collection>Computing Database (Alumni Edition)</collection><collection>ProQuest Pharma Collection</collection><collection>Technology Research Database</collection><collection>ProQuest SciTech Collection</collection><collection>ProQuest Technology Collection</collection><collection>ProQuest Central (Alumni) (purchase pre-March 2016)</collection><collection>ABI/INFORM Collection (Alumni Edition)</collection><collection>Research Library (Alumni Edition)</collection><collection>ProQuest Central (Alumni)</collection><collection>ProQuest Central UK/Ireland</collection><collection>Advanced Technologies &amp; Aerospace Collection</collection><collection>ProQuest Central Essentials</collection><collection>AUTh Library subscriptions: ProQuest Central</collection><collection>ProQuest Business Premium Collection</collection><collection>Technology Collection</collection><collection>ProQuest One Community College</collection><collection>ProQuest Central</collection><collection>Business Premium Collection (Alumni)</collection><collection>ABI/INFORM Global (Corporate)</collection><collection>ProQuest Central Student</collection><collection>Research Library Prep</collection><collection>SciTech Premium Collection</collection><collection>ProQuest Computer Science Collection</collection><collection>ProQuest Business Collection (Alumni Edition)</collection><collection>ProQuest Business Collection</collection><collection>Computer Science Database</collection><collection>ABI/INFORM Professional Advanced</collection><collection>Advanced Technologies Database with Aerospace</collection><collection>Computer and Information Systems Abstracts – Academic</collection><collection>Computer and Information Systems Abstracts Professional</collection><collection>ABI/INFORM global</collection><collection>Computing Database</collection><collection>ProQuest Research Library</collection><collection>Research Library (Corporate)</collection><collection>ProQuest advanced technologies &amp; aerospace journals</collection><collection>test</collection><collection>ProQuest One Business</collection><collection>ProQuest One Business (Alumni)</collection><collection>ProQuest One Academic Eastern Edition (DO NOT USE)</collection><collection>ProQuest One Academic</collection><collection>ProQuest One Academic UKI Edition</collection><collection>ProQuest Central Basic</collection><jtitle>Multimedia tools and applications</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Qi, Feng</au><au>Tianjiang, 
Wang</au><au>Fang, Liu</au><au>HeFei, Lin</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Research on multi-camera information fusion method for intelligent perception</atitle><jtitle>Multimedia tools and applications</jtitle><stitle>Multimed Tools Appl</stitle><date>2018-06-01</date><risdate>2018</risdate><volume>77</volume><issue>12</issue><spage>15003</spage><epage>15026</epage><pages>15003-15026</pages><issn>1380-7501</issn><eissn>1573-7721</eissn><abstract>In this paper, the Gaussian Mixture Model and Mean Shift algorithm are used to detect and track moving objects in the visual perception network composed of multiple cameras. And on this basis, a target matching method based on wavelet transform, which is applied in a visual perception network composed by multiple camera, fusing visual information from different cameras is proposed. This method takes local features as basis of target matching, and applies wavelet transformation to detect the feature points that represent important information of the target image, and then extracts the color of the neighborhood of feature points as its salient features. The method of classification and clustering is applied by calculating the distance of salient features vector space to measure similarities of the target features and thus realize target recognition. The test result shows that the method can realize the matching and recognition of moving object with the cooperation among multiple cameras.</abstract><cop>New York</cop><pub>Springer US</pub><doi>10.1007/s11042-017-5085-z</doi><tpages>24</tpages></addata></record>
fulltext fulltext
identifier ISSN: 1380-7501
ispartof Multimedia tools and applications, 2018-06, Vol.77 (12), p.15003-15026
issn 1380-7501
1573-7721
language eng
recordid cdi_proquest_journals_2059445366
source SpringerLink (Online service)
subjects Cameras
Clustering
Computer Communication Networks
Computer Science
Data integration
Data Structures and Information Theory
Feature extraction
Feature recognition
Image detection
Matching
Moving object recognition
Multimedia Information Systems
Multisensor fusion
Special Purpose and Application-Based Systems
Target recognition
Visual perception
Visual perception driven algorithms
Wavelet transforms
title Research on multi-camera information fusion method for intelligent perception
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T09%3A31%3A41IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Research%20on%20multi-camera%20information%20fusion%20method%20for%20intelligent%20perception&rft.jtitle=Multimedia%20tools%20and%20applications&rft.au=Qi,%20Feng&rft.date=2018-06-01&rft.volume=77&rft.issue=12&rft.spage=15003&rft.epage=15026&rft.pages=15003-15026&rft.issn=1380-7501&rft.eissn=1573-7721&rft_id=info:doi/10.1007/s11042-017-5085-z&rft_dat=%3Cproquest_cross%3E2059445366%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2059445366&rft_id=info:pmid/&rfr_iscdi=true