Estimating 3D Motion and Forces of Human–Object Interactions from Internet Videos
In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person together with the object pose, the contact positions and the contact forces exerted on the human body. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of the interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the 2D position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent video + MoCap dataset capturing typical parkour actions, and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments.
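The core idea — recovering forces that are never directly observed by requiring them to explain the observed motion through the laws of dynamics — can be illustrated with a toy example. The sketch below is a hypothetical, minimal analogue and not the authors' implementation: a 1D point mass whose noisy per-frame height estimates stand in for the 2D pose detections, with the supporting contact force recovered by direct-transcription trajectory optimization. All names and the choice of `scipy`'s SLSQP solver are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the paper's code): jointly fit a motion
# trajectory and a contact force to noisy observations, constrained so that
# the force explains the motion via Newtonian dynamics (m*a = f - m*g).
import numpy as np
from scipy.optimize import minimize

T, dt, m, g = 30, 1.0 / 30.0, 60.0, 9.81            # frames, timestep, mass, gravity
rng = np.random.default_rng(0)
q_obs = 1.0 + 0.02 * rng.standard_normal(T)          # noisy per-frame "pose" (height in m)

def unpack(x):
    return x[:T], x[T:]                              # positions q_0..q_{T-1}, forces f_0..f_{T-1}

def objective(x):
    q, f = unpack(x)
    data = np.sum((q - q_obs) ** 2)                  # match the per-frame estimates
    reg = 1e-6 * np.sum(f ** 2)                      # penalize unnecessarily large forces
    return data + reg

def dynamics_residual(x):
    q, f = unpack(x)
    acc = (q[2:] - 2 * q[1:-1] + q[:-2]) / dt ** 2   # finite-difference acceleration
    return m * acc - (f[1:-1] - m * g)               # equality constraint: m*a = f - m*g

x0 = np.concatenate([q_obs, np.full(T, m * g)])      # init at observations / static support force
res = minimize(objective, x0, method="SLSQP",
               constraints={"type": "eq", "fun": dynamics_residual})
q_opt, f_opt = unpack(res.x)
print("mean recovered contact force:", f_opt[1:-1].mean())  # ~ m*g for a near-static pose
```

In the paper's actual setting, the decision variables span whole-body joint trajectories, object pose, and contact forces over the entire clip, subject to contact and friction constraints, which is what makes it a large-scale trajectory optimization problem.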
Saved in:
Published in: | International journal of computer vision 2022-02, Vol.130 (2), p.363-383 |
---|---|
Main authors: | Li, Zongmian; Sedlar, Jiri; Carpentier, Justin; Laptev, Ivan; Mansard, Nicolas; Sivic, Josef |
Format: | Article |
Language: | English |
Subjects: | Actuation; Analysis; Artificial Intelligence; Computer Imaging; Computer Science; Contact force; Datasets; Human body; Human motion; Image Processing and Computer Vision; Internet; Internet videos; Optimization; Pattern Recognition; Pattern Recognition and Graphics; Robotics; Three dimensional motion; Trajectory optimization; Video; Vision |
Online access: | Full text |
DOI: | 10.1007/s11263-021-01540-1 |
ISSN: | 0920-5691 |
EISSN: | 1573-1405 |
Publisher: | New York: Springer US |
Source: | Springer Nature - Complete Springer Journals |