Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding
Published in: Psychophysiology, 1999-01, Vol. 36 (1), pp. 35-43
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results of this automated system with those of manual FACS (Facial Action Coding System; Ekman & Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.
ISSN: 0048-5772, 1469-8986, 1540-5958
DOI: 10.1017/S0048577299971184
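The abstract describes feature points tracked with a hierarchical optical-flow algorithm. As a rough illustration of that idea, and not the authors' implementation, the sketch below uses OpenCV's pyramidal Lucas-Kanade tracker as a stand-in; the video filename, the corner-detection substitute for manually marked points, and all parameter values are assumptions.

```python
import cv2

cap = cv2.VideoCapture("face_sequence.avi")  # hypothetical input video
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Feature points around the brows, eyes, and mouth would typically be
# marked by hand in the first frame; a corner detector stands in here.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=30,
                                 qualityLevel=0.01, minDistance=10)

lk_params = dict(
    winSize=(15, 15),
    maxLevel=3,  # pyramid depth: the "hierarchical" coarse-to-fine search
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
)

trajectories = [points.reshape(-1, 2)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None, **lk_params)
    trajectories.append(points.reshape(-1, 2))  # status marks lost tracks
    prev_gray = gray

cap.release()
```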
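The normalization for position, orientation, and scale can be pictured as a similarity alignment of each frame's points. A minimal sketch, assuming the first two tracked points are stable anchors such as the inner eye corners (an illustrative choice; the paper's own normalization procedure is not reproduced here):

```python
import numpy as np

def normalize_points(pts, left_idx=0, right_idx=1):
    """Map an (N, 2) point array into a canonical frame: translated to
    the anchor midpoint, rotated so the anchor axis is horizontal, and
    scaled to unit anchor distance."""
    left, right = pts[left_idx], pts[right_idx]
    center = (left + right) / 2.0
    d = right - left
    angle = np.arctan2(d[1], d[0])
    scale = np.linalg.norm(d)
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])  # rotation by -angle
    return (pts - center) @ R.T / scale
```

Applied per frame (e.g. `[normalize_points(f) for f in trajectories]`), this removes rigid head motion so that the remaining point displacements reflect facial action rather than pose.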
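For the classification step, a hedged sketch of discriminant function analysis with a random training / cross-validation split, using scikit-learn's LinearDiscriminantAnalysis in place of whatever statistics package the authors used; the synthetic data, feature dimensionality, and three-class label set are placeholders, not the paper's fifteen action units.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))    # per-sequence feature-point displacements
y = rng.integers(0, 3, size=400)  # e.g. three competing action units

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# "Agreement" in the abstract's sense: classification accuracy against
# the manual FACS labels.
print("training agreement:", lda.score(X_train, y_train))
print("cross-validation agreement:", lda.score(X_test, y_test))
```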