Automatic location and tracking of the facial region in color video sequences

A novel technique is introduced to locate and track the facial area in videophone-type sequences. The proposed method essentially consists of two components: (i) a color processing unit, and (ii) a knowledge-based shape and color analysis module. The color processing component utilizes the distribution of skin-tones in the HSV color space to obtain an initial set of candidate regions or objects. The second component in the segmentation scheme, that is, the shape and color analysis module, is used to correctly identify and select the facial region in the case where more than one object has been extracted. A number of fuzzy membership functions are devised to provide information about each object's shape, orientation, location and average hue. An aggregation operator finally combines these measures and correctly selects the facial area. The suggested approach is robust with regard to different skin types, and various types of object or background motion within the scene. Furthermore, the algorithm can be implemented at a low computational complexity due to the binary nature of the operations involved. Experimental results are presented for a series of CIF and QCIF video sequences.
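
As a rough illustration of the two-stage pipeline described in the abstract, the sketch below thresholds skin tones in HSV to obtain candidate regions and then scores each candidate with simple fuzzy membership functions combined by an aggregation operator. The hue/saturation thresholds, the trapezoidal membership shapes, and the use of a plain mean as the aggregator are illustrative assumptions, not the values or functions published in the paper.

```python
# Illustrative sketch only: the abstract does not give exact thresholds or membership
# parameters, so the hue range, membership shapes, and the mean aggregator below are
# assumptions rather than the authors' published design.
import numpy as np
import cv2  # OpenCV, assumed available for color conversion and connected components


def candidate_skin_regions(bgr_frame, hue_range=(0, 25), min_sat=40, min_val=40):
    """Step (i): threshold skin tones in HSV to obtain candidate binary regions."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    mask = ((h >= hue_range[0]) & (h <= hue_range[1]) &
            (s >= min_sat) & (v >= min_val)).astype(np.uint8)
    # Label connected components so each candidate object can be scored separately.
    num_labels, labels = cv2.connectedComponents(mask)
    return [(labels == k) for k in range(1, num_labels)]


def trapezoid(x, a, b, c, d):
    """Generic trapezoidal fuzzy membership function."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)


def face_score(region_mask, mean_hue):
    """Step (ii): combine fuzzy shape/location/hue cues for one candidate object."""
    ys, xs = np.nonzero(region_mask)
    width = region_mask.shape[1]
    aspect = (ys.max() - ys.min() + 1) / max(xs.max() - xs.min() + 1, 1)
    center_x = xs.mean() / width  # normalized horizontal position of the object
    # Hypothetical memberships: a face is roughly an upright ellipse near the image
    # center with a skin-like average hue; the paper's actual functions differ.
    mu_shape = trapezoid(aspect, 0.8, 1.1, 1.8, 2.5)
    mu_location = trapezoid(center_x, 0.1, 0.3, 0.7, 0.9)
    mu_hue = trapezoid(mean_hue, 0, 5, 20, 30)
    # Simple mean standing in for the aggregation operator; pick the candidate
    # with the highest score as the facial region.
    return float(np.mean([mu_shape, mu_location, mu_hue]))
```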

Bibliographic Details
Published in: Signal Processing: Image Communication, 1999, Vol. 14 (5), pp. 359-388
Main authors: Herodotou, N.; Plataniotis, K.N.; Venetsanopoulos, A.N.
Format: Article
Language: English
Publisher: Elsevier B.V., Amsterdam
DOI: 10.1016/S0923-5965(98)00018-6
ISSN: 0923-5965
EISSN: 1879-2677
Keywords: Applied sciences; Color image segmentation; Exact sciences and technology; Face localization and tracking; Facial features; Fuzzy membership functions; Image processing; Information, signal and communications theory; Signal processing; Telecommunications and information theory; Video processing
Online access: Full text (via ScienceDirect, Elsevier)