Human Pose Estimation Method Based on Optimized Multi-scale Feature Fusion

Human pose estimation underpins many tasks in computer vision. Because of scale variation, previous human pose estimation networks lose pose information during feature extraction, which makes further accuracy gains difficult. To address this, a parallel network combined with multi-scale feature fusion is used to extract features. The method optimizes feature extraction in two steps. First, in the multi-scale feature fusion stage, transposed convolution and mixed dilated convolution reduce the loss of feature information. Second, in the feature-map output stage, weighted feature maps of different scales are combined to remove redundant information, retain pose information, and produce a higher-quality high-resolution heatmap. Experiments show that the accuracy of this method is 2.1% higher than that of the state-of-the-art HRNet (High-Resolution Network).
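Since only the abstract is available, the following is a minimal PyTorch sketch of how the two stages could fit together: transposed convolutions upsample the lower-resolution branches of a parallel backbone, a mixed-dilation convolution block enlarges the receptive field without further downsampling, and learned weights fuse the multi-scale feature maps into a single high-resolution heatmap. Every module name, channel size, dilation rate, and scale factor here is an assumption for illustration, not the authors' implementation.

# Minimal sketch of the two stages described in the abstract.
# All names, channel sizes, and dilation rates are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedDilatedBlock(nn.Module):
    # 3x3 convolutions with mixed dilation rates: the receptive field grows
    # without additional downsampling, so less spatial detail is lost.
    def __init__(self, channels, dilations=(1, 2, 5)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
             for d in dilations])

    def forward(self, x):
        for conv in self.convs:
            x = F.relu(conv(x))
        return x

class WeightedFusionHead(nn.Module):
    # Upsamples the lower-resolution branches with transposed convolutions,
    # fuses all scales with learned weights, and predicts joint heatmaps.
    def __init__(self, branch_channels=(32, 64, 128, 256), num_joints=17):
        super().__init__()
        c0 = branch_channels[0]
        # One transposed-convolution upsampler per lower-resolution branch
        # (scale factors 2, 4, 8 relative to the highest-resolution branch).
        self.upsamplers = nn.ModuleList(
            [nn.ConvTranspose2d(c, c0, kernel_size=2 ** (i + 1),
                                stride=2 ** (i + 1))
             for i, c in enumerate(branch_channels[1:])])
        self.mdc = MixedDilatedBlock(c0)
        # One learnable fusion weight per scale, normalized by softmax.
        self.weights = nn.Parameter(torch.ones(len(branch_channels)))
        self.head = nn.Conv2d(c0, num_joints, kernel_size=1)

    def forward(self, branches):
        # branches: feature maps ordered from high to low resolution.
        maps = [branches[0]] + [up(b) for up, b
                                in zip(self.upsamplers, branches[1:])]
        w = torch.softmax(self.weights, dim=0)
        fused = sum(wi * m for wi, m in zip(w, maps))
        return self.head(self.mdc(fused))  # (N, num_joints, H, W) heatmaps

if __name__ == "__main__":
    # Toy multi-scale features, shaped as a parallel (HRNet-style) backbone
    # might produce for a 256x192 input image.
    feats = [torch.randn(1, 32, 64, 48), torch.randn(1, 64, 32, 24),
             torch.randn(1, 128, 16, 12), torch.randn(1, 256, 8, 6)]
    print(WeightedFusionHead()(feats).shape)  # torch.Size([1, 17, 64, 48])

In this reading, transposed convolutions are used instead of plain interpolation so that the upsampling itself is learned, which is one plausible way to limit the feature-information loss the abstract refers to.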

Bibliographic Details

Published in: Ji xie gong cheng xue bao, 2024-01, Vol. 60 (16), p. 306
Authors: Liu, Hongzhe; Tao, Xiangru; Xu, Cheng; Cao, Dongpu
Format: Article
Language: Chinese
ISSN: 0577-6686
Publisher: Chinese Mechanical Engineering Society (CMES), Beijing
Subjects: Accuracy; Computer vision; Convolution; Feature extraction; Feature maps; High resolution; Pose estimation
Online access: Full text