MOTPose: Multi-object 6D Pose Estimation for Dynamic Video Sequences using Attention-based Temporal Fusion

Cluttered bin-picking environments are challenging for pose estimation models. Despite the impressive progress enabled by deep learning, single-view RGB pose estimation models perform poorly in cluttered dynamic environments. Imbuing the rich temporal information contained in videos of such scenes has the potential to enhance a model's ability to deal with the adverse effects of occlusion and the dynamic nature of the environments. Moreover, joint object detection and pose estimation models are better suited to leverage the co-dependent nature of the two tasks, improving the accuracy of both. To this end, we propose attention-based temporal fusion for multi-object 6D pose estimation that accumulates information across multiple frames of a video sequence. Our MOTPose method takes a sequence of images as input and performs joint object detection and pose estimation for all objects in one forward pass. It learns to aggregate both object embeddings and object parameters over multiple time steps using cross-attention-based fusion modules. We evaluate our method on the physically realistic cluttered bin-picking dataset SynPick and on the YCB-Video dataset, and demonstrate improved pose estimation accuracy as well as better object detection accuracy.
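The abstract describes cross-attention-based fusion modules that aggregate per-frame object embeddings over multiple time steps. As a rough illustration of that general idea, here is a minimal PyTorch sketch, assuming fixed-dimensional object embeddings; the module name, tensor shapes, and residual design are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of cross-attention-based temporal fusion.
# Assumption: each frame yields a set of object-query embeddings of shape
# (num_objects, embed_dim); past-frame embeddings are kept in a memory buffer.
import torch
import torch.nn as nn

class TemporalFusion(nn.Module):
    """Fuse current-frame object embeddings with embeddings from earlier
    frames via cross-attention (queries = current frame, keys/values = memory)."""

    def __init__(self, embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, current: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # current: (batch, num_objects, embed_dim)      -- embeddings at time t
        # memory:  (batch, T * num_objects, embed_dim)  -- embeddings from T past frames
        fused, _ = self.cross_attn(query=current, key=memory, value=memory)
        # Residual connection keeps the single-frame estimate as a fallback
        # when the temporal context is uninformative.
        return self.norm(current + fused)

if __name__ == "__main__":
    fusion = TemporalFusion()
    current = torch.randn(2, 10, 256)      # 10 object queries at time t
    memory = torch.randn(2, 3 * 10, 256)   # queries accumulated from 3 past frames
    print(fusion(current, memory).shape)   # torch.Size([2, 10, 256])
```

In this reading, the same fusion step could be applied to object parameters (e.g., pose estimates) as well as embeddings, matching the abstract's statement that both are aggregated over time.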

Bibliographic Details

Authors: Periyasamy, Arul Selvam; Behnke, Sven
Format: Article
Language: English
Published: 2024-03-14
Subjects: Computer Science - Robotics
Source: arXiv.org
DOI: 10.48550/arxiv.2403.09309
Online access: https://arxiv.org/abs/2403.09309