High-precision face key point positioning method and system based on deep learning

The invention discloses a high-precision face key point positioning method and system based on deep learning. The positioning method comprises the following steps: S1, constructing a plurality of regional key point positioning networks; S2, training each regional key point positioning network with the portrait region and key point sample data corresponding to that region; S3, segmenting the to-be-processed face image into portrait regions; S4, based on the processing task type of the face image, selecting the portrait regions that need to be processed and inputting them into the corresponding key point positioning networks to obtain the key points for that task; and S5, integrating and outputting those key points together with the face image. According to the invention, the human face is divided into a plurality of regions whose key points are positioned independently, so that when one or more parts are occluded, the accuracy and stability of the key point…
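The S1–S5 pipeline in the abstract can be sketched in a few lines of Python. This is only an illustrative outline under stated assumptions: the region names, the stub "network" functions, and the task-to-region mapping are invented for the example and do not come from the patent, whose actual networks are trained deep-learning models.

```python
# Minimal sketch of the region-based keypoint pipeline (S1-S5).
# All names below (eye_net, mouth_net, TASK_REGIONS, ...) are
# hypothetical stand-ins for the patent's trained networks.

# S1/S2: one keypoint "network" per face region (stubs in place of
# trained models; each returns (label, x, y) tuples for its region).
def eye_net(region_image):
    return [("eye_corner_left", 10, 12), ("eye_corner_right", 30, 12)]

def mouth_net(region_image):
    return [("mouth_left", 12, 40), ("mouth_right", 28, 40)]

REGION_NETWORKS = {"eyes": eye_net, "mouth": mouth_net}

# Assumed mapping from processing task to the regions it needs.
TASK_REGIONS = {"gaze_tracking": ["eyes"], "expression": ["eyes", "mouth"]}

def segment_face(face_image):
    # S3: split the face image into named region crops (stubbed:
    # every region just gets the whole image here).
    return {name: face_image for name in REGION_NETWORKS}

def locate_keypoints(face_image, task):
    # S4: select only the regions the task needs and run their networks.
    regions = segment_face(face_image)
    keypoints = []
    for name in TASK_REGIONS[task]:
        keypoints.extend(REGION_NETWORKS[name](regions[name]))
    # S5: return the integrated keypoint set alongside the image.
    return face_image, keypoints

image, pts = locate_keypoints("fake_image", "expression")
print(len(pts))  # 4 keypoints: two from the eye net, two from the mouth net
```

Because each region has its own network and tasks only query the regions they need, occluding one part (say, the mouth) leaves the other regions' predictions untouched, which is the robustness property the abstract claims.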

Detailed description

Bibliographic details
Main authors: HU NENG, YANG JINJIANG, DAI KANKAN, LI YUNXI
Format: Patent
Language: Chinese; English
Publication date: 2020-05-29
Patent number: CN111209873A
Source: esp@cenet
Subjects: CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
HANDLING RECORD CARRIERS
PHYSICS
PRESENTATION OF DATA
RECOGNITION OF DATA
RECORD CARRIERS