Deep Learning for Optimization of Trajectories for Quadrotors

This paper presents a novel learning-based trajectory planning framework for quadrotors that combines model-based optimization techniques with deep learning. Specifically, we formulate the trajectory optimization problem as a quadratic programming (QP) problem with dynamic and collision-free constraints using piecewise trajectory segments through safe flight corridors [1]. We train neural networks to directly learn the time allocation for each segment to generate optimal smooth and fast trajectories. Furthermore, the constrained optimization problem is applied as a separate implicit layer for backpropagation in the network, for which the differential loss function can be obtained. We introduce an additional penalty function to penalize time allocations which result in solutions that violate the constraints, to accelerate the training process and increase the success rate of the original optimization problem. In addition, we enable a flexible number of piecewise trajectory segments by adding an extra end-of-sentence token during training. We illustrate the performance of the proposed method via extensive simulation and experimentation and show that it works in real time in diverse, cluttered environments.
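
The core idea in the abstract, a network that outputs per-segment time allocations and is trained with a penalty on allocations that violate the constraints of the downstream QP, can be illustrated with a minimal sketch. The PyTorch code below is not the authors' implementation: it replaces the paper's implicit QP layer with a hand-written surrogate cost, and every name, feature shape, and limit (TimeAllocationNet, surrogate_loss, v_max) is an assumption made only for illustration.

```python
# Illustrative sketch only (not the paper's code): a small network predicts
# per-segment time allocations and is trained with a surrogate cost of
# total flight time plus a penalty on (proxy) constraint violations.
import torch
import torch.nn as nn


class TimeAllocationNet(nn.Module):
    """Maps corridor/waypoint features to positive per-segment durations."""

    def __init__(self, feat_dim: int, max_segments: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, max_segments),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Softplus keeps every predicted segment duration strictly positive.
        return nn.functional.softplus(self.mlp(features)) + 1e-3


def surrogate_loss(times, seg_lengths, v_max, w_time=1.0, w_penalty=10.0):
    """Stand-in for the paper's objective: total flight time plus a hinge
    penalty when a segment's implied average speed exceeds the velocity
    limit (a crude proxy for the QP's dynamic-feasibility constraints)."""
    total_time = times.sum(dim=-1)
    avg_speed = seg_lengths / times
    violation = torch.relu(avg_speed - v_max)      # zero when feasible
    penalty = (violation ** 2).sum(dim=-1)
    return (w_time * total_time + w_penalty * penalty).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    net = TimeAllocationNet(feat_dim=16, max_segments=6)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    feats = torch.randn(32, 16)                    # toy corridor features
    seg_lengths = torch.rand(32, 6) * 4.0 + 0.5    # toy segment lengths [m]
    for step in range(200):
        times = net(feats)                         # (32, 6) durations [s]
        loss = surrogate_loss(times, seg_lengths, v_max=3.0)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In the actual framework described in the abstract, the gradient would instead flow through the constrained QP solve itself, treated as an implicit layer, with the penalty term added to discourage time allocations for which the QP becomes infeasible.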

Bibliographic Details
Main Authors: Wu, Yuwei; Sun, Xiatao; Spasojevic, Igor; Kumar, Vijay
Format: Article
Language: English
Subjects: Computer Science - Robotics
Online Access: Full text at https://arxiv.org/abs/2309.15191
DOI: 10.48550/arxiv.2309.15191
Published: 2023-09-26
Source: arXiv.org