Flexible Vision
This article presents a mobile agent-based distributed vision fusion architecture that provides a flexible vision-fusion solution: it increases power efficiency by reducing excessive communication and enhances sensor-fusion capability through migratory, in situ, on-demand algorithms for vision data processing and analysis.
Saved in:

Published in: | IEEE Robotics & Automation Magazine, 2010-09, Vol. 17 (3), p. 66-77 |
---|---|
Main authors: | Nestinger, Stephen S.; Cheng, Harry H. |
Format: | Article |
Language: | eng |
Subjects: | Automation; Cameras; Inventory controls; Machine vision; Manufacturing engineering; Mobile agents; Mobile communication; Networks; Robot sensing systems; Robots; Sensors; Vision; Vision systems; Visualization |
Online access: | Order full text |
container_end_page | 77 |
container_issue | 3 |
container_start_page | 66 |
container_title | IEEE robotics & automation magazine |
container_volume | 17 |
creator | Nestinger, Stephen S.; Cheng, Harry H. |
description | This article presents a mobile agent-based distributed vision fusion architecture that provides a flexible vision-fusion solution: it increases power efficiency by reducing excessive communication and enhances sensor-fusion capability through migratory, in situ, on-demand algorithms for vision data processing and analysis. The IEEE FIPA standard-compliant mobile agent system Mobile-C, implemented as a C library, serves as the foundation of the architecture. Mobile agents dynamically migrate from one sensor node to another to combine all necessary sensor data in the manner required by the system requesting the data. Agents are dispatched to target vision systems on the network only on demand, reducing network congestion and the required communication bandwidth. Using mobile agents in a distributed vision system also allows specific fusion techniques to be encapsulated within the agents. The differences between monolithic and mobile agent-based approaches are discussed, along with future considerations. The validity of the architecture is demonstrated through two case studies. The first involves localizing a part in a real experimental setup: a retrofitted robotic workcell composed of a Puma 560, an IBM 7575, a conveyor system, and a vision system. The second vertically and horizontally integrates multiple systems into a tier-scalable planetary reconnaissance experimental system involving two vision systems: a Puma 560 manipulator and a K-Team Khepera III mobile robot. All source code, including Mobile-C, the mobile agents, and the mobile agent code presented in the article, is available at the project Web site. |
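The migrate-process-return cycle described in the abstract lends itself to a compact illustration in C, the language in which Mobile-C is implemented. The sketch below is a minimal, self-contained simulation under stated assumptions: the types and function names (SensorNode, Agent, execute_on_node, dispatch_agent) are hypothetical and do not reflect the actual Mobile-C API, migration is simulated in-process rather than over a network, and a running average stands in for an application-specific vision-fusion step.

```c
/*
 * Hypothetical sketch of the on-demand mobile-agent dispatch pattern.
 * NOT the Mobile-C API: all names here are illustrative. Migration is
 * simulated in-process; a real deployment would serialize the agent
 * and send it to each node's agency over the network.
 */
#include <stdio.h>
#include <stddef.h>

#define MAX_HOPS 8

/* A sensor node exposing one local reading (e.g., a feature position). */
typedef struct {
    const char *name;
    double      reading;
} SensorNode;

/* The agent carries its itinerary and accumulated fusion state, so only
 * this small structure, never raw vision data, crosses node boundaries. */
typedef struct {
    SensorNode *itinerary[MAX_HOPS];
    size_t      hops;
    double      fused;    /* running fusion state */
    size_t      samples;  /* readings fused so far */
} Agent;

/* Fusion step executed in situ at each node; a running average stands
 * in for an application-specific vision-fusion technique. */
static void execute_on_node(Agent *a, SensorNode *node)
{
    a->fused = (a->fused * a->samples + node->reading) / (a->samples + 1);
    a->samples++;
    printf("agent at %-6s: local reading %.2f, fused %.2f\n",
           node->name, node->reading, a->fused);
}

/* On-demand dispatch: "migrate" the agent along its itinerary and
 * return only the fused result to the requesting system. */
static double dispatch_agent(Agent *a)
{
    for (size_t i = 0; i < a->hops; i++)
        execute_on_node(a, a->itinerary[i]);
    return a->fused;
}

int main(void)
{
    SensorNode cam0 = {"cam0", 12.1};
    SensorNode cam1 = {"cam1", 11.7};
    SensorNode cam2 = {"cam2", 12.4};

    Agent a = {{&cam0, &cam1, &cam2}, 3, 0.0, 0};
    printf("requester receives fused value %.2f\n", dispatch_agent(&a));
    return 0;
}
```

The design point the sketch tries to capture is that the raw vision data never leaves the node where it was captured; only the lightweight agent and its fused result travel, which is the source of the bandwidth and power savings the abstract claims.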
doi_str_mv | 10.1109/MRA.2010.937857 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1070-9932 |
ispartof | IEEE robotics & automation magazine, 2010-09, Vol.17 (3), p.66-77 |
issn | 1070-9932; 1558-223X |
language | eng |
recordid | cdi_proquest_journals_1027236556 |
source | IEEE Electronic Library (IEL) |
subjects | Automation; Cameras; Inventory controls; Machine vision; Manufacturing engineering; Mobile agents; Mobile communication; Networks; Robot sensing systems; Robots; Sensors; Vision; Vision systems; Visualization |
title | Flexible Vision |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T09%3A12%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Flexible%20Vision&rft.jtitle=IEEE%20robotics%20&%20automation%20magazine&rft.au=Nestinger,%20Stephen%20S&rft.date=2010-09&rft.volume=17&rft.issue=3&rft.spage=66&rft.epage=77&rft.pages=66-77&rft.issn=1070-9932&rft.eissn=1558-223X&rft.coden=IRAMEB&rft_id=info:doi/10.1109/MRA.2010.937857&rft_dat=%3Cproquest_RIE%3E2717301421%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1027236556&rft_id=info:pmid/&rft_ieee_id=5569023&rfr_iscdi=true |