3D human pose estimation from depth maps using a deep combination of poses
Many real-world applications require the estimation of human body joints for higher-level tasks as, for example, human behaviour understanding. In recent years, depth sensors have become a popular approach to obtain three-dimensional information. The depth maps generated by these sensors provide information that can be employed to disambiguate the poses observed in two-dimensional images.
Saved in:
Published in: | arXiv.org 2018-07 |
---|---|
Main authors: | Marin-Jimenez, Manuel J; Romero-Ramirez, Francisco J; Muñoz-Salinas, Rafael; Medina-Carnicer, Rafael |
Format: | Article |
Language: | eng |
Subjects: | Human behavior; Machine learning; Sensors; Three dimensional bodies |
Online access: | Full text |
container_title | arXiv.org
creator | Marin-Jimenez, Manuel J; Romero-Ramirez, Francisco J; Muñoz-Salinas, Rafael; Medina-Carnicer, Rafael |
description | Many real-world applications require the estimation of human body joints for higher-level tasks as, for example, human behaviour understanding. In recent years, depth sensors have become a popular approach to obtain three-dimensional information. The depth maps generated by these sensors provide information that can be employed to disambiguate the poses observed in two-dimensional images. This work addresses the problem of 3D human pose estimation from depth maps employing a Deep Learning approach. We propose a model, named Deep Depth Pose (DDP), which receives a depth map containing a person and a set of predefined 3D prototype poses and returns the 3D position of the body joints of the person. In particular, DDP is defined as a ConvNet that computes the specific weights needed to linearly combine the prototypes for the given input. We have thoroughly evaluated DDP on the challenging 'ITOP' and 'UBC3V' datasets, which respectively depict realistic and synthetic samples, defining a new state-of-the-art on them. |
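The core idea described in the abstract — a ConvNet predicts a set of weights, and the output 3D pose is a linear combination of fixed prototype poses — can be sketched as follows. This is an illustrative reconstruction under assumptions, not the authors' code: the function name, array shapes, and the toy prototypes are hypothetical; the real DDP network would produce `weights` from a depth map.

```python
import numpy as np

def combine_prototypes(weights, prototypes):
    """Linearly combine K prototype poses into one output pose.

    weights:    (K,)       mixing coefficients (in DDP, predicted by a ConvNet
                           from the input depth map; here supplied by hand)
    prototypes: (K, J, 3)  K predefined 3D poses, each with J joints
    returns:    (J, 3)     the weighted sum over the K prototypes
    """
    weights = np.asarray(weights, dtype=float)
    prototypes = np.asarray(prototypes, dtype=float)
    # Contract the K axis: pose[j] = sum_k weights[k] * prototypes[k, j]
    return np.tensordot(weights, prototypes, axes=1)

# Toy example: 2 hypothetical prototype poses with 3 joints each.
P = np.array([[[0, 0, 0], [0, 1, 0], [0, 2, 0]],
              [[0, 0, 0], [1, 1, 0], [2, 2, 0]]], dtype=float)
w = [0.5, 0.5]
pose = combine_prototypes(w, P)  # shape (3, 3); joint 1 is [0.5, 1.0, 0.0]
```

Representing the output as a mixture over a fixed pose basis constrains predictions to plausible body configurations, which is the motivation the abstract gives for combining prototypes rather than regressing joints directly.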
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2018-07 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2074057426 |
source | Free E-Journals |
subjects | Human behavior; Machine learning; Sensors; Three dimensional bodies |
title | 3D human pose estimation from depth maps using a deep combination of poses |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-12T19%3A42%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=3D%20human%20pose%20estimation%20from%20depth%20maps%20using%20a%20deep%20combination%20of%20poses&rft.jtitle=arXiv.org&rft.au=Marin-Jimenez,%20Manuel%20J&rft.date=2018-07-14&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2074057426%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2074057426&rft_id=info:pmid/&rfr_iscdi=true |