An Integrated FPGA Accelerator for Deep Learning-Based 2D/3D Path Planning

Path planning is a crucial component for realizing the autonomy of mobile robots. However, due to limited computational resources on mobile robots, it remains challenging to deploy state-of-the-art methods and achieve real-time performance. To address this, we propose P3Net (PointNet-based Path Planning Networks), a lightweight deep-learning-based method for 2D/3D path planning, and design an IP core (P3NetCore) targeting FPGA SoCs (Xilinx ZCU104). P3Net improves the algorithm and model architecture of the recently proposed MPNet. P3Net employs an encoder with a PointNet backbone and a lightweight planning network in order to extract robust point cloud features and sample path points from a promising region. P3NetCore comprises a fully-pipelined point cloud encoder, a batched bidirectional path planner, and a parallel collision checker, covering most of the algorithm. On the 2D (3D) datasets, P3Net with the IP core runs 30.52-186.36x and 7.68-143.62x (15.69-93.26x and 5.30-45.27x) faster than the ARM Cortex CPU and the Nvidia Jetson while consuming only 0.255 W (0.809 W), and is up to 1278.14x (455.34x) more power-efficient than the workstation. P3Net improves the success rate by up to 28.2% and plans a near-optimal path, leading to a significantly better tradeoff between computation and solution quality than MPNet and the state-of-the-art sampling-based methods.
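The record contains no code, so purely as an illustration of the PointNet-style encoder the abstract describes (a shared per-point MLP followed by a symmetric max-pooling that yields a permutation-invariant global feature), a minimal NumPy sketch might look like the following. All names, layer sizes, and weights are hypothetical placeholders for the example, not the authors' implementation.

```python
import numpy as np

def pointnet_encoder(points, weights):
    """Shared per-point MLP followed by global max-pooling (PointNet-style)."""
    h = points  # (N, d) point cloud, e.g. d = 2 or 3
    for W, b in weights:
        h = np.maximum(h @ W + b, 0.0)  # same MLP layer applied to every point
    return h.max(axis=0)  # max over points -> permutation-invariant feature

# Example: encode a random 2D obstacle point cloud into a 128-dim feature.
rng = np.random.default_rng(0)
dims = [2, 64, 128]
weights = [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
           for i, o in zip(dims[:-1], dims[1:])]
feature = pointnet_encoder(rng.uniform(-1, 1, (512, 2)), weights)
print(feature.shape)  # (128,)
```

The max-pooling step is what makes the feature independent of the ordering of the input points, which is why such an encoder is robust to raw point cloud input.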

Bibliographic Details
Published in: IEEE Transactions on Computers, 2024-06, Vol. 73 (6), p. 1442-1456
Main authors: Sugiura, Keisuke; Matsutani, Hiroki
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 1456
container_issue 6
container_start_page 1442
container_title IEEE transactions on computers
container_volume 73
creator Sugiura, Keisuke
Matsutani, Hiroki
description Path planning is a crucial component for realizing the autonomy of mobile robots. However, due to limited computational resources on mobile robots, it remains challenging to deploy state-of-the-art methods and achieve real-time performance. To address this, we propose P3Net (PointNet-based Path Planning Networks), a lightweight deep-learning-based method for 2D/3D path planning, and design an IP core (P3NetCore) targeting FPGA SoCs (Xilinx ZCU104). P3Net improves the algorithm and model architecture of the recently proposed MPNet. P3Net employs an encoder with a PointNet backbone and a lightweight planning network in order to extract robust point cloud features and sample path points from a promising region. P3NetCore comprises a fully-pipelined point cloud encoder, a batched bidirectional path planner, and a parallel collision checker, covering most of the algorithm. On the 2D (3D) datasets, P3Net with the IP core runs 30.52-186.36x and 7.68-143.62x (15.69-93.26x and 5.30-45.27x) faster than the ARM Cortex CPU and the Nvidia Jetson while consuming only 0.255 W (0.809 W), and is up to 1278.14x (455.34x) more power-efficient than the workstation. P3Net improves the success rate by up to 28.2% and plans a near-optimal path, leading to a significantly better tradeoff between computation and solution quality than MPNet and the state-of-the-art sampling-based methods.
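As a further illustration of the batched bidirectional planner and collision checker that P3NetCore accelerates, the following is a minimal, CPU-only sketch of an MPNet/P3Net-style bidirectional planning loop. The toy step function standing in for the planning network, the circular-obstacle model, and all parameter values are assumptions for the example only, not details taken from the paper.

```python
import numpy as np

def collision_free(a, b, obstacles, radius=0.1, steps=20):
    """Check sampled points along segment a->b against circular obstacles."""
    for t in np.linspace(0.0, 1.0, steps):
        p = (1.0 - t) * a + t * b
        if np.any(np.linalg.norm(obstacles - p, axis=1) < radius):
            return False
    return True

def bidirectional_plan(start, goal, sample_next, obstacles, max_iters=50):
    """Grow forward/backward paths alternately; join them as soon as a
    straight, collision-free segment exists between their endpoints."""
    fwd, bwd = [start], [goal]
    for _ in range(max_iters):
        if collision_free(fwd[-1], bwd[-1], obstacles):
            return fwd + bwd[::-1]                    # join the partial paths
        fwd.append(sample_next(fwd[-1], bwd[-1]))     # "planning network" step
        fwd, bwd = bwd, fwd                           # swap roles each iteration
    return None                                       # no path within budget

# Toy usage: a hypothetical step function that moves toward the other endpoint.
step = lambda cur, tgt: cur + 0.2 * (tgt - cur)
obs = np.array([[0.5, 0.8]])
path = bidirectional_plan(np.zeros(2), np.ones(2), step, obs)
print(path is not None)
```

Swapping the two partial paths each iteration is what makes the search bidirectional; in the system the abstract describes, the step would be produced by the learned planning network and the segment checks would run on the parallel collision checker in the FPGA fabric.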
doi_str_mv 10.1109/TC.2024.3377895
format Article
fullrecord <record><control><sourceid>proquest_ieee_</sourceid><recordid>TN_cdi_ieee_primary_10474486</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><ieee_id>10474486</ieee_id><sourcerecordid>3053296997</sourcerecordid><originalsourceid>FETCH-LOGICAL-c285t-a2fa44e3ec3c51d8c8896abb0ef193ad4cafa066ae6b9feb39e77ade3ddfbe7d3</originalsourceid><addsrcrecordid>eNpNkE1PwkAQhjdGExE9e_HQxHNhv7d7rEUQ00QOeN5Mt7MIwRa35eC_twQOHiaTzDzvTPIQ8sjohDFqp-tiwimXEyGMyay6IiOmlEmtVfqajChlWWqFpLfkrut2lFLNqR2R97xJlk2Pmwg91sl8tciT3Hvc4zBoYxKGmiEekhIhNttmk75AN4B8NhWzZAX9V7LaQ3Pa3JObAPsOHy59TD7nr-viLS0_FssiL1PPM9WnwANIiQK98IrVmc8yq6GqKAZmBdTSQwCqNaCubMBKWDQGahR1HSo0tRiT5_PdQ2x_jtj1btceYzO8dIIqwa221gzU9Ez52HZdxOAOcfsN8dcx6k7C3LpwJ2HuImxIPJ0TW0T8R0sjZabFH-YAZi4</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>3053296997</pqid></control><display><type>article</type><title>An Integrated FPGA Accelerator for Deep Learning-Based 2D/3D Path Planning</title><source>IEEE Electronic Library (IEL)</source><creator>Sugiura, Keisuke ; Matsutani, Hiroki</creator><creatorcontrib>Sugiura, Keisuke ; Matsutani, Hiroki</creatorcontrib><description>Path planning is a crucial component for realizing the autonomy of mobile robots. However, due to limited computational resources on mobile robots, it remains challenging to deploy state-of-the-art methods and achieve real-time performance. To address this, we propose P3Net (PointNet-based Path Planning Networks), a lightweight deep-learning-based method for 2D/3D path planning, and design an IP core (P3NetCore) targeting FPGA SoCs (Xilinx ZCU104). P3Net improves the algorithm and model architecture of the recently-proposed MPNet. P3Net employs an encoder with a PointNet backbone and a lightweight planning network in order to extract robust point cloud features and sample path points from a promising region. P3NetCore is comprised of the fully-pipelined point cloud encoder, batched bidirectional path planner, and parallel collision checker, to cover most part of the algorithm. On the 2D (3D) datasets, P3Net with the IP core runs 30.52-186.36x and 7.68-143.62x (15.69-93.26x and 5.30-45.27x) faster than ARM Cortex CPU and Nvidia Jetson while only consuming 0.255W (0.809W), and is up to 1278.14x (455.34x) power-efficient than the workstation. P3Net improves the success rate by up to 28.2% and plans a near-optimal path, leading to a significantly better tradeoff between computation and solution quality than MPNet and the state-of-the-art sampling-based methods.</description><identifier>ISSN: 0018-9340</identifier><identifier>EISSN: 1557-9956</identifier><identifier>DOI: 10.1109/TC.2024.3377895</identifier><identifier>CODEN: ITCOB4</identifier><language>eng</language><publisher>New York: IEEE</publisher><subject>Algorithms ; Coders ; Deep learning ; Feature extraction ; Field programmable gate arrays ; FPGA ; IP (Internet Protocol) ; Lightweight ; neural path planning ; Parallel processing ; Path planning ; Planning ; Point cloud compression ; point cloud processing ; PointNet ; Robots ; State of the art ; Weight reduction</subject><ispartof>IEEE transactions on computers, 2024-06, Vol.73 (6), p.1442-1456</ispartof><rights>Copyright The Institute of Electrical and Electronics Engineers, Inc. 
(IEEE) 2024</rights><lds50>peer_reviewed</lds50><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed><cites>FETCH-LOGICAL-c285t-a2fa44e3ec3c51d8c8896abb0ef193ad4cafa066ae6b9feb39e77ade3ddfbe7d3</cites><orcidid>0000-0001-8534-2381 ; 0000-0001-9578-3842</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://ieeexplore.ieee.org/document/10474486$$EHTML$$P50$$Gieee$$Hfree_for_read</linktohtml><link.rule.ids>314,780,784,796,27924,27925,54758</link.rule.ids></links><search><creatorcontrib>Sugiura, Keisuke</creatorcontrib><creatorcontrib>Matsutani, Hiroki</creatorcontrib><title>An Integrated FPGA Accelerator for Deep Learning-Based 2D/3D Path Planning</title><title>IEEE transactions on computers</title><addtitle>TC</addtitle><description>Path planning is a crucial component for realizing the autonomy of mobile robots. However, due to limited computational resources on mobile robots, it remains challenging to deploy state-of-the-art methods and achieve real-time performance. To address this, we propose P3Net (PointNet-based Path Planning Networks), a lightweight deep-learning-based method for 2D/3D path planning, and design an IP core (P3NetCore) targeting FPGA SoCs (Xilinx ZCU104). P3Net improves the algorithm and model architecture of the recently-proposed MPNet. P3Net employs an encoder with a PointNet backbone and a lightweight planning network in order to extract robust point cloud features and sample path points from a promising region. P3NetCore is comprised of the fully-pipelined point cloud encoder, batched bidirectional path planner, and parallel collision checker, to cover most part of the algorithm. On the 2D (3D) datasets, P3Net with the IP core runs 30.52-186.36x and 7.68-143.62x (15.69-93.26x and 5.30-45.27x) faster than ARM Cortex CPU and Nvidia Jetson while only consuming 0.255W (0.809W), and is up to 1278.14x (455.34x) power-efficient than the workstation. 
P3Net improves the success rate by up to 28.2% and plans a near-optimal path, leading to a significantly better tradeoff between computation and solution quality than MPNet and the state-of-the-art sampling-based methods.</description><subject>Algorithms</subject><subject>Coders</subject><subject>Deep learning</subject><subject>Feature extraction</subject><subject>Field programmable gate arrays</subject><subject>FPGA</subject><subject>IP (Internet Protocol)</subject><subject>Lightweight</subject><subject>neural path planning</subject><subject>Parallel processing</subject><subject>Path planning</subject><subject>Planning</subject><subject>Point cloud compression</subject><subject>point cloud processing</subject><subject>PointNet</subject><subject>Robots</subject><subject>State of the art</subject><subject>Weight reduction</subject><issn>0018-9340</issn><issn>1557-9956</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2024</creationdate><recordtype>article</recordtype><sourceid>ESBDL</sourceid><sourceid>RIE</sourceid><recordid>eNpNkE1PwkAQhjdGExE9e_HQxHNhv7d7rEUQ00QOeN5Mt7MIwRa35eC_twQOHiaTzDzvTPIQ8sjohDFqp-tiwimXEyGMyay6IiOmlEmtVfqajChlWWqFpLfkrut2lFLNqR2R97xJlk2Pmwg91sl8tciT3Hvc4zBoYxKGmiEekhIhNttmk75AN4B8NhWzZAX9V7LaQ3Pa3JObAPsOHy59TD7nr-viLS0_FssiL1PPM9WnwANIiQK98IrVmc8yq6GqKAZmBdTSQwCqNaCubMBKWDQGahR1HSo0tRiT5_PdQ2x_jtj1btceYzO8dIIqwa221gzU9Ez52HZdxOAOcfsN8dcx6k7C3LpwJ2HuImxIPJ0TW0T8R0sjZabFH-YAZi4</recordid><startdate>20240601</startdate><enddate>20240601</enddate><creator>Sugiura, Keisuke</creator><creator>Matsutani, Hiroki</creator><general>IEEE</general><general>The Institute of Electrical and Electronics Engineers, Inc. (IEEE)</general><scope>97E</scope><scope>ESBDL</scope><scope>RIA</scope><scope>RIE</scope><scope>AAYXX</scope><scope>CITATION</scope><scope>7SC</scope><scope>7SP</scope><scope>8FD</scope><scope>JQ2</scope><scope>L7M</scope><scope>L~C</scope><scope>L~D</scope><orcidid>https://orcid.org/0000-0001-8534-2381</orcidid><orcidid>https://orcid.org/0000-0001-9578-3842</orcidid></search><sort><creationdate>20240601</creationdate><title>An Integrated FPGA Accelerator for Deep Learning-Based 2D/3D Path Planning</title><author>Sugiura, Keisuke ; Matsutani, Hiroki</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c285t-a2fa44e3ec3c51d8c8896abb0ef193ad4cafa066ae6b9feb39e77ade3ddfbe7d3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2024</creationdate><topic>Algorithms</topic><topic>Coders</topic><topic>Deep learning</topic><topic>Feature extraction</topic><topic>Field programmable gate arrays</topic><topic>FPGA</topic><topic>IP (Internet Protocol)</topic><topic>Lightweight</topic><topic>neural path planning</topic><topic>Parallel processing</topic><topic>Path planning</topic><topic>Planning</topic><topic>Point cloud compression</topic><topic>point cloud processing</topic><topic>PointNet</topic><topic>Robots</topic><topic>State of the art</topic><topic>Weight reduction</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Sugiura, Keisuke</creatorcontrib><creatorcontrib>Matsutani, Hiroki</creatorcontrib><collection>IEEE All-Society Periodicals Package (ASPP) 2005-present</collection><collection>IEEE Open Access Journals</collection><collection>IEEE All-Society Periodicals Package (ASPP) 1998-Present</collection><collection>IEEE Electronic Library (IEL)</collection><collection>CrossRef</collection><collection>Computer and Information 
Systems Abstracts</collection><collection>Electronics &amp; Communications Abstracts</collection><collection>Technology Research Database</collection><collection>ProQuest Computer Science Collection</collection><collection>Advanced Technologies Database with Aerospace</collection><collection>Computer and Information Systems Abstracts – Academic</collection><collection>Computer and Information Systems Abstracts Professional</collection><jtitle>IEEE transactions on computers</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Sugiura, Keisuke</au><au>Matsutani, Hiroki</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>An Integrated FPGA Accelerator for Deep Learning-Based 2D/3D Path Planning</atitle><jtitle>IEEE transactions on computers</jtitle><stitle>TC</stitle><date>2024-06-01</date><risdate>2024</risdate><volume>73</volume><issue>6</issue><spage>1442</spage><epage>1456</epage><pages>1442-1456</pages><issn>0018-9340</issn><eissn>1557-9956</eissn><coden>ITCOB4</coden><abstract>Path planning is a crucial component for realizing the autonomy of mobile robots. However, due to limited computational resources on mobile robots, it remains challenging to deploy state-of-the-art methods and achieve real-time performance. To address this, we propose P3Net (PointNet-based Path Planning Networks), a lightweight deep-learning-based method for 2D/3D path planning, and design an IP core (P3NetCore) targeting FPGA SoCs (Xilinx ZCU104). P3Net improves the algorithm and model architecture of the recently-proposed MPNet. P3Net employs an encoder with a PointNet backbone and a lightweight planning network in order to extract robust point cloud features and sample path points from a promising region. P3NetCore is comprised of the fully-pipelined point cloud encoder, batched bidirectional path planner, and parallel collision checker, to cover most part of the algorithm. On the 2D (3D) datasets, P3Net with the IP core runs 30.52-186.36x and 7.68-143.62x (15.69-93.26x and 5.30-45.27x) faster than ARM Cortex CPU and Nvidia Jetson while only consuming 0.255W (0.809W), and is up to 1278.14x (455.34x) power-efficient than the workstation. P3Net improves the success rate by up to 28.2% and plans a near-optimal path, leading to a significantly better tradeoff between computation and solution quality than MPNet and the state-of-the-art sampling-based methods.</abstract><cop>New York</cop><pub>IEEE</pub><doi>10.1109/TC.2024.3377895</doi><tpages>15</tpages><orcidid>https://orcid.org/0000-0001-8534-2381</orcidid><orcidid>https://orcid.org/0000-0001-9578-3842</orcidid><oa>free_for_read</oa></addata></record>
fulltext fulltext
identifier ISSN: 0018-9340
ispartof IEEE transactions on computers, 2024-06, Vol.73 (6), p.1442-1456
issn 0018-9340
1557-9956
language eng
recordid cdi_ieee_primary_10474486
source IEEE Electronic Library (IEL)
subjects Algorithms
Coders
Deep learning
Feature extraction
Field programmable gate arrays
FPGA
IP (Internet Protocol)
Lightweight
neural path planning
Parallel processing
Path planning
Planning
Point cloud compression
point cloud processing
PointNet
Robots
State of the art
Weight reduction
title An Integrated FPGA Accelerator for Deep Learning-Based 2D/3D Path Planning