IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM

PROBLEM TO BE SOLVED: To appropriately simulate the state of a face to which cosmetics are applied, according to the type of each facial part. SOLUTION: An image acquisition unit 102 acquires a first facial image and a second facial image. A display unit 104 displays the first facial image and the second facial image. A type discrimination unit 110 discriminates the type of the user's facial parts on the basis of at least one of the first facial image and the second facial image. An area determination unit 113 determines a first makeup area on the basis of a makeup action that the user performs while the facial parts are in a first state, and determines a second makeup area on the basis of that makeup action and the shapes of the facial parts. SELECTED DRAWING: Figure 4
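
The abstract describes four cooperating units: image acquisition (102), display (104), type discrimination (110), and area determination (113), where the second makeup area is derived from the user's makeup stroke plus the shape/type of the facial part. The sketch below illustrates how such a pipeline could be wired together. It is a minimal illustration only: the function names, the brightness-based classification rule, and the type-dependent dilation radius are assumptions made for this sketch and are not taken from the patent.

```python
"""Illustrative sketch of the pipeline described in the abstract
(units 102, 104, 110, 113). All names and the simple geometric logic
are assumptions for illustration, not the patent's implementation."""

from dataclasses import dataclass
import numpy as np


@dataclass
class MakeupAction:
    """A stroke the user draws on the displayed face (pixel coordinates)."""
    points: list[tuple[int, int]]
    color: tuple[int, int, int]


def discriminate_part_type(first_image: np.ndarray, second_image: np.ndarray) -> str:
    """Type discrimination unit 110 (sketch): classify a facial part,
    e.g. single vs. double eyelid, from at least one of the two images.
    A real system would use facial landmarks; here it is stubbed with a
    made-up brightness rule."""
    return "double_eyelid" if second_image.mean() > first_image.mean() else "single_eyelid"


def first_makeup_area(action: MakeupAction, shape: tuple[int, int]) -> np.ndarray:
    """Area determination unit 113, step 1: the area actually traced while
    the facial part is in the first state (e.g. eyes closed)."""
    mask = np.zeros(shape, dtype=bool)
    for x, y in action.points:
        mask[y, x] = True
    return mask


def second_makeup_area(action: MakeupAction, part_type: str, shape: tuple[int, int]) -> np.ndarray:
    """Area determination unit 113, step 2: expand the traced area according
    to the part's type/shape (e.g. how the applied area appears when the eye
    is open). The per-type radius is a placeholder value."""
    radius = {"double_eyelid": 3, "single_eyelid": 1}.get(part_type, 2)
    mask = np.zeros(shape, dtype=bool)
    for x, y in action.points:
        y0, y1 = max(0, y - radius), min(shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(shape[1], x + radius + 1)
        mask[y0:y1, x0:x1] = True
    return mask


if __name__ == "__main__":
    h, w = 64, 64
    first = np.zeros((h, w), dtype=np.uint8)      # e.g. first state (eyes closed)
    second = np.full((h, w), 10, dtype=np.uint8)  # e.g. second state (eyes open)
    stroke = MakeupAction(points=[(20, 30), (21, 30), (22, 31)], color=(180, 120, 150))

    part = discriminate_part_type(first, second)
    area1 = first_makeup_area(stroke, (h, w))
    area2 = second_makeup_area(stroke, part, (h, w))
    print(part, area1.sum(), area2.sum())
```

The point the abstract implies, and which the sketch mirrors, is that one user stroke yields two masks: a literal one for the first state of the facial part, and a second one adjusted by the part's type and shape for the other state.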

Bibliographic details
Author: NAKANO KANAKO
Format: Patent
Language: English; Japanese
Record ID: cdi_epo_espacenet_JP2023164494A
Source: esp@cenet
Subjects:
CALCULATING
COMPUTING
COUNTING
HAIRDRESSING OR SHAVING EQUIPMENT
HAND OR TRAVELLING ARTICLES
HUMAN NECESSITIES
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
MANICURING OR OTHER COSMETIC TREATMENT
PHYSICS
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-12T09%3A43%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=NAKANO%20KANAKO&rft.date=2023-11-10&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EJP2023164494A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true