Age Group and Gender Estimation in the Wild with Deep RoR Architecture
Format: Article
Language: English
Online access: Order full text
Abstract: Automatically predicting age group and gender from face images acquired in unconstrained conditions is an important and challenging task in many real-world applications. However, conventional methods based on manually designed features perform unsatisfactorily on in-the-wild benchmarks, because such features cannot cope with the large variations in unconstrained images. This difficulty is alleviated to some degree by Convolutional Neural Networks (CNNs), owing to their powerful feature representations. In this paper, we propose a new CNN-based method for age group and gender estimation that leverages Residual Networks of Residual Networks (RoR), which exhibits better optimization ability for age group and gender classification than other CNN architectures. Moreover, two modest mechanisms, based on observed characteristics of age groups, are presented to further improve the performance of age estimation. To further improve performance and alleviate over-fitting, the RoR model is first pre-trained on ImageNet, then fine-tuned on the IMDB-WIKI-101 data set to further learn the features of face images, and finally fine-tuned on the Adience data set. Our experiments demonstrate the effectiveness of the RoR method for age and gender estimation in the wild, where it achieves better performance than other CNN methods. Finally, RoR-152+IMDB-WIKI-101 with the two mechanisms achieves new state-of-the-art results on the Adience benchmark.
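
The abstract describes two concrete ideas: the RoR architecture, which stacks shortcut connections at several levels (block, group, and a root shortcut spanning all groups), and a three-stage transfer-learning pipeline (ImageNet, then IMDB-WIKI-101, then Adience). The following is a rough PyTorch sketch of both ideas only; the class names `RoRGroup`/`RoRNet`, the channel widths, and the depths are illustrative assumptions, not the paper's RoR-152 configuration, which additionally varies channels and strides across groups via projection shortcuts.

```python
# Minimal, illustrative sketch of the RoR idea (not the paper's RoR-152).
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """Innermost shortcut level: a plain pre-activation residual block."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        )

    def forward(self, x):
        return x + self.body(x)  # block-level shortcut


class RoRGroup(nn.Module):
    """Middle shortcut level: a group of blocks wrapped in its own shortcut,
    so the shortcuts themselves form a residual hierarchy ("RoR")."""

    def __init__(self, channels: int, num_blocks: int):
        super().__init__()
        self.blocks = nn.Sequential(*(BasicBlock(channels)
                                      for _ in range(num_blocks)))

    def forward(self, x):
        return x + self.blocks(x)  # group-level shortcut


class RoRNet(nn.Module):
    """Outermost shortcut level: a root shortcut spanning all groups.
    Toy version: channels stay constant so identity shortcuts suffice."""

    def __init__(self, channels: int = 64, num_groups: int = 3,
                 blocks_per_group: int = 2, num_classes: int = 1000):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1, bias=False)
        self.groups = nn.Sequential(*(RoRGroup(channels, blocks_per_group)
                                      for _ in range(num_groups)))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):
        x = self.stem(x)
        x = x + self.groups(x)  # root-level shortcut
        return self.fc(self.pool(x).flatten(1))


# Three-stage transfer pipeline from the abstract: pre-train on ImageNet,
# fine-tune on IMDB-WIKI-101 (101 age labels), then fine-tune on Adience
# (8 age groups), swapping the classifier head at each stage.
model = RoRNet(num_classes=1000)                 # stage 1: ImageNet
# ... train on ImageNet ...
model.fc = nn.Linear(model.fc.in_features, 101)  # stage 2: IMDB-WIKI-101
# ... fine-tune on IMDB-WIKI-101 ...
model.fc = nn.Linear(model.fc.in_features, 8)    # stage 3: Adience age groups
# ... fine-tune on Adience ...
```

Replacing only the final linear layer between stages, as sketched above, is the standard way to reuse a pre-trained backbone across label spaces of different sizes; the training loops themselves are omitted.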
DOI: 10.48550/arxiv.1710.02985