Simple yet effective 3D ego-pose lift-up based on vector and distance for a mounted omnidirectional camera
Following the advances in convolutional neural networks and synthetic data generation, 3D egocentric body pose estimation from a mounted fisheye camera has been developed. Previous works estimated 3D joint positions from raw image pixels with intermediate supervision during the process. The mounted fisheye camera captures notably different images that are affected by the optical properties of the lens, angles of view, and setup positions. Therefore, 3D ego-pose estimation from a mounted fisheye camera must be trained for each set of camera optics and setup. We propose 3D ego-pose estimation from a single mounted omnidirectional camera that captures the entire circumference with back-to-back dual fisheye cameras. The omnidirectional camera can capture the user’s body in a 360° field of view under a wide variety of motions. We also propose a simple feed-forward network model to estimate 3D joint positions from 2D joint locations. The lift-up model can be used in real time yet obtains accuracy comparable to that of previous works on our new dataset. Moreover, our model is trainable with the ground-truth 3D joint positions and the unit vectors toward the 3D joint positions, which are easily generated from existing publicly available 3D mocap datasets. This advantage alleviates the data collection and training burden due to changes in the camera optics and setups, although it is limited to the effect after the 2D joint location estimation.
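The record gives only the abstract, not the authors' implementation, so the following is a minimal, hypothetical sketch of the "vector and distance" lift-up idea it describes: a plain feed-forward network maps 2D joint locations to a per-joint unit direction vector and a scalar distance, whose product gives the 3D joint position. PyTorch, the joint count (15), the layer sizes, and camera-centered joint coordinates are all assumptions, not details from the paper.

```python
# Hypothetical sketch only -- not the authors' code. Assumes PyTorch, 15 joints,
# and 3D joint positions expressed in a camera-centered coordinate frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LiftUpMLP(nn.Module):
    """Feed-forward lift-up: 2D joint locations -> per-joint unit vector + distance."""

    def __init__(self, num_joints: int = 15, hidden: int = 1024):
        super().__init__()
        self.num_joints = num_joints
        self.backbone = nn.Sequential(
            nn.Linear(num_joints * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.vec_head = nn.Linear(hidden, num_joints * 3)   # direction toward each joint
        self.dist_head = nn.Linear(hidden, num_joints)      # distance to each joint

    def forward(self, joints_2d: torch.Tensor):
        # joints_2d: (B, J, 2) normalized 2D joint locations from the omnidirectional image
        h = self.backbone(joints_2d.flatten(1))
        vec = self.vec_head(h).view(-1, self.num_joints, 3)
        vec = vec / (vec.norm(dim=-1, keepdim=True) + 1e-8)      # unit vectors
        dist = self.dist_head(h).view(-1, self.num_joints, 1)    # distances
        return vec * dist, vec, dist                             # 3D positions, vectors, distances

def lift_up_loss(pred_pos, pred_vec, gt_pos):
    # Supervision uses ground-truth 3D positions; the unit vectors toward the joints
    # are derived from them, as the abstract notes for mocap-generated training data.
    gt_vec = gt_pos / (gt_pos.norm(dim=-1, keepdim=True) + 1e-8)
    cos_term = (1.0 - (pred_vec * gt_vec).sum(dim=-1)).mean()    # align directions
    return F.mse_loss(pred_pos, gt_pos) + cos_term

# Usage: a batch of 8 poses, 15 joints each.
model = LiftUpMLP()
pos3d, vec, dist = model(torch.randn(8, 15, 2))
```

Splitting the output into a unit direction and a distance mirrors the title's "vector and distance" phrasing and lets direction targets come straight from mocap data; the paper itself specifies the actual inputs, architecture, and losses.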
Saved in:
Published in: | Applied intelligence (Dordrecht, Netherlands), 2023-02, Vol.53 (3), p.2616-2628 |
---|---|
Main authors: | Miura, Teppei; Sako, Shinji |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 2628 |
---|---|
container_issue | 3 |
container_start_page | 2616 |
container_title | Applied intelligence (Dordrecht, Netherlands) |
container_volume | 53 |
creator | Miura, Teppei; Sako, Shinji |
description | Following the advances in convolutional neural networks and synthetic data generation, 3D egocentric body pose estimation from a mounted fisheye camera has been developed. Previous works estimated 3D joint positions from raw image pixels with intermediate supervision during the process. The mounted fisheye camera captures notably different images that are affected by the optical properties of the lens, angles of view, and setup positions. Therefore, 3D ego-pose estimation from a mounted fisheye camera must be trained for each set of camera optics and setup. We propose 3D ego-pose estimation from a single mounted omnidirectional camera that captures the entire circumference with back-to-back dual fisheye cameras. The omnidirectional camera can capture the user’s body in a 360° field of view under a wide variety of motions. We also propose a simple feed-forward network model to estimate 3D joint positions from 2D joint locations. The lift-up model can be used in real time yet obtains accuracy comparable to that of previous works on our new dataset. Moreover, our model is trainable with the ground-truth 3D joint positions and the unit vectors toward the 3D joint positions, which are easily generated from existing publicly available 3D mocap datasets. This advantage alleviates the data collection and training burden due to changes in the camera optics and setups, although it is limited to the effect after the 2D joint location estimation. |
doi_str_mv | 10.1007/s10489-022-03417-3 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0924-669X |
ispartof | Applied intelligence (Dordrecht, Netherlands), 2023-02, Vol.53 (3), p.2616-2628 |
issn | 0924-669X; 1573-7497 |
language | eng |
recordid | cdi_proquest_journals_2763975984 |
source | SpringerLink Journals - AutoHoldings |
subjects | Artificial Intelligence; Artificial neural networks; Cameras; Computer Science; Data collection; Datasets; Machines; Manufacturing; Mechanical Engineering; Optical properties; Pose estimation; Processes; Three dimensional models |
title | Simple yet effective 3D ego-pose lift-up based on vector and distance for a mounted omnidirectional camera |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T02%3A46%3A28IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Simple%20yet%20effective%203D%20ego-pose%20lift-up%20based%20on%20vector%20and%20distance%20for%20a%20mounted%20omnidirectional%20camera&rft.jtitle=Applied%20intelligence%20(Dordrecht,%20Netherlands)&rft.au=Miura,%20Teppei&rft.date=2023-02-01&rft.volume=53&rft.issue=3&rft.spage=2616&rft.epage=2628&rft.pages=2616-2628&rft.issn=0924-669X&rft.eissn=1573-7497&rft_id=info:doi/10.1007/s10489-022-03417-3&rft_dat=%3Cproquest_cross%3E2763975984%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2763975984&rft_id=info:pmid/&rfr_iscdi=true |