System-level design space identification for Many-Core Vision Processors
Published in: | Microprocessors and microsystems 2017-07, Vol.52, p.2-22 |
---|---|
Main authors: | Yudi, Jones; Humberto Llanos, Carlos; Huebner, Michael |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 22 |
---|---|
container_issue | |
container_start_page | 2 |
container_title | Microprocessors and microsystems |
container_volume | 52 |
creator | Yudi, Jones; Humberto Llanos, Carlos; Huebner, Michael |
description | The current main trends in embedded systems, Cyber-Physical Systems (CPS) and the Internet of Things (IoT), are driving the development of complex, distributed, low-power, and high-performance embedded systems. An important feature needed in this new era is embedded intelligence: the ability to process data locally and act on the environment without a remote central processing server. In this context, Smart Cameras have emerged: devices able to acquire images and apply sophisticated algorithms for different Image Processing and Computer Vision (IP/CV) applications. Technology convergence and the evolution of embedded systems toward multi-/many-core architectures make it possible to envision future cameras as many-core systems that efficiently exploit the natural parallelism of IP/CV to meet embedded application constraints, e.g. real-time operation, power consumption, silicon area, temperature management, and fault tolerance, among others. In this work, we present the development of a Many-Core Vision Processor architecture suitable for future Smart Cameras. In our design methodology we analyze several aspects, from high-level application analysis down to fine-grained operations and physical aspects (e.g. geometry and spatial distribution). The main analysis is performed using a SystemC/TLM2.0 simulator developed specifically for this project. Silicon area, power consumption, and timing estimates are also provided as results of an early Design-Space Exploration (DSE). Using these results we propose a first complete architecture, which is implemented in an FPGA. Details of the hardware implementation are provided, as well as synthesis results. In comparison with other works from the literature, the implemented architecture shows the potential of the project developed in this work. (A minimal TLM-2.0 modeling sketch follows the record fields below.) |
doi_str_mv | 10.1016/j.micpro.2017.05.013 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0141-9331 |
ispartof | Microprocessors and microsystems, 2017-07, Vol.52, p.2-22 |
issn | 0141-9331 (ISSN); 1872-9436 (EISSN) |
language | eng |
recordid | cdi_proquest_journals_1949646478 |
source | Elsevier ScienceDirect Journals |
subjects | Cameras; Computer simulation; Computer vision; Cyber-physical systems; Electric power distribution; Embedded systems; Fault tolerance; Image acquisition; Image processing; Internet of Things; Microprocessors; Power consumption; Processors; Silicon; Spatial distribution |
title | System-level design space identification for Many-Core Vision Processors |
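The abstract refers to a SystemC/TLM2.0 simulator used for the main analysis. As an illustration only, here is a minimal sketch of a loosely-timed TLM-2.0 initiator/target pair modeling a generic processing-element-to-memory access; the module names (PixelSource, FrameMemory), the 10 ns latency, and the buffer size are hypothetical and are not taken from the paper's simulator.

```cpp
// Minimal loosely-timed SystemC/TLM-2.0 sketch (hypothetical names and timings).
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>

// Target: a toy frame memory reached through a blocking-transport socket.
struct FrameMemory : sc_core::sc_module {
    tlm_utils::simple_target_socket<FrameMemory> socket;
    unsigned char mem[256];                                  // assumed buffer size

    SC_CTOR(FrameMemory) : socket("socket") {
        socket.register_b_transport(this, &FrameMemory::b_transport);
        std::memset(mem, 0, sizeof(mem));
    }

    // Copy payload data in/out and charge a fixed, assumed access latency.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        sc_dt::uint64 addr = trans.get_address();
        unsigned      len  = trans.get_data_length();
        if (addr + len > sizeof(mem)) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.is_write()) std::memcpy(&mem[addr], trans.get_data_ptr(), len);
        else                  std::memcpy(trans.get_data_ptr(), &mem[addr], len);
        delay += sc_core::sc_time(10, sc_core::SC_NS);        // assumed latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Initiator: a processing element writing one pixel value to the memory.
struct PixelSource : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<PixelSource> socket;

    SC_CTOR(PixelSource) : socket("socket") { SC_THREAD(run); }

    void run() {
        unsigned char pixel = 0x7F;
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x10);
        trans.set_data_ptr(&pixel);
        trans.set_data_length(1);
        trans.set_streaming_width(1);
        trans.set_byte_enable_ptr(nullptr);
        trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
        socket->b_transport(trans, delay);                    // blocking TLM-2.0 call
        wait(delay);                                          // consume modeled latency
    }
};

int sc_main(int, char*[]) {
    PixelSource src("src");
    FrameMemory mem("mem");
    src.socket.bind(mem.socket);                              // initiator -> target
    sc_core::sc_start();
    return 0;
}
```

In a loosely-timed model of this kind, each access simply accumulates an assumed latency into `delay`, which is the usual style when simulation speed matters more than cycle accuracy, as in early design-space exploration.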