Vision-Based Construction Worker Activity Analysis Informed by Body Posture

Abstract: Activity analysis of construction resources is generally performed by manually observing construction operations either in person or through recorded videos. It is thus prone to observer fatigue and bias and is of limited scalability and cost-effectiveness. Automating this procedure obviates these issues and can allow project teams to focus on performance improvement. This paper introduces a novel deep learning and vision-based activity analysis framework that estimates and tracks two-dimensional (2D) worker pose and outputs per-frame worker activity labels given input red-green-blue (RGB) video footage of a construction worker operation. We used 317 annotated videos of bricklaying and plastering operations to train and validate the proposed method. This method obtained 82.6% mean average precision (mAP) for pose estimation, and 72.6% multiple-object tracking accuracy (MOTA) and 81.3% multiple-object tracking precision (MOTP) for pose tracking. Cross-validation activity analysis accuracy of 78.5% was also obtained. We show that worker pose contributes to activity analysis results. This highlights the potential for using vision-based ergonomics assessment methods that rely on pose in conjunction with the proposed method for assessing the ergonomic viability of individual activities.
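
For reference, the pose-tracking figures quoted in the abstract follow standard multiple-object tracking conventions. The sketch below restates the usual CLEAR MOT definitions of MOTA and MOTP; the notation is ours, not the authors': FN_t, FP_t, and IDSW_t are the per-frame false negatives, false positives, and identity switches, g_t is the number of ground-truth objects in frame t, d_{i,t} is the matching score or distance for matched pair i in frame t, and c_t is the number of matches in frame t.

% Reference sketch of the standard CLEAR MOT metrics; the abstract does not
% spell out the exact variant used for pose tracking.
\begin{align}
  \mathrm{MOTA} &= 1 - \frac{\sum_{t}\left(\mathrm{FN}_{t} + \mathrm{FP}_{t} + \mathrm{IDSW}_{t}\right)}{\sum_{t} g_{t}},\\
  \mathrm{MOTP} &= \frac{\sum_{i,t} d_{i,t}}{\sum_{t} c_{t}}.
\end{align}

Under these definitions MOTA penalizes missed detections, false alarms, and identity switches, while MOTP measures how well matched poses align with the ground truth. The 81.3% MOTP reported above is consistent with an overlap-style matching score in which higher values are better, though the abstract does not state this explicitly.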

Bibliographic Details
Published in: Journal of computing in civil engineering, 2020-07, Vol. 34 (4)
Authors: Roberts, Dominic; Torres Calderon, Wilfredo; Tang, Shuai; Golparvar-Fard, Mani
Format: Article
Language: English
Subjects: Bricklaying; Construction industry; Ergonomics; Machine learning; Multiple target tracking; Technical Papers; Vision
Online access: Full text
DOI: 10.1061/(ASCE)CP.1943-5487.0000898
Publisher: New York: American Society of Civil Engineers
ISSN: 0887-3801
EISSN: 1943-5487