EarSegDB 25

Detailed Description

Stored in:
Bibliographic Details
Main Author: Rahul Lahkar
Format: Dataset
Language: English
Online Access: Order full text
Description
Summary: This dataset was created for automatic segmentation of the human ear in an unconstrained environment. It consists of 1275 ear images (both left and right ears) from 25 different persons, together with 1275 manually created segmentation masks, one for each raw ear image. The raw image data is divided into two parts, Train and Test: four images from each person (two of the right ear and two of the left ear) were selected at random and placed in the Test folder (100 images), and the rest were placed in the Train folder (1175 images). The masks are split in the same way as their respective raw ear images and stored in two folders, Trainannot (1175 images) and Testannot (100 images). The collected data is completely original and new; the samples were gathered from students, teachers and employees of Gauhati University, Guwahati, Assam, India, and one of its affiliated colleges (Pub Kamrup College, Baihata Chariali, Kamrup, Assam, India). Data was collected from both male and female subjects aged 21-58 years using a smartphone camera, varying the angle and distance, with no control over the environment. The masks were created manually for each image, with the ear region in white and the non-ear region in black. A table titled "EarSegDB 25 info" is also provided, carrying additional information such as the left and right ear images and the age and gender of each person. This additional data could be useful for researchers carrying out further work, such as age and gender prediction from ear images.
DOI:10.17632/zp5c895yrg.1
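The subject-wise Train/Test split described in the summary (two left-ear and two right-ear images per person held out for testing) can be sketched as follows. The filename scheme used here (person id, ear side, shot index) is hypothetical, since the record does not specify the dataset's actual naming convention:

```python
import random
from collections import defaultdict

def split_per_person(filenames, test_per_side=2, seed=0):
    """Split ear-image filenames into Train/Test as described:
    for each person, two left-ear and two right-ear images are
    chosen at random for Test; everything else goes to Train.

    Assumes hypothetical filenames like 'P03_L_07.jpg'
    (person id, L/R side, shot index).
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for name in filenames:
        person, side, _ = name.split("_")
        groups[(person, side)].append(name)

    test, train = [], []
    for _, names in sorted(groups.items()):
        rng.shuffle(names)
        test.extend(names[:test_per_side])   # 2 images per person per side
        train.extend(names[test_per_side:])  # remainder to Train
    return train, test

# Synthetic example: 25 persons, 5 shots per ear side
# (the real dataset has uneven counts per person, which this handles too).
files = [f"P{p:02d}_{s}_{i:02d}.jpg"
         for p in range(25) for s in "LR" for i in range(5)]
train, test = split_per_person(files)
print(len(test))   # 25 persons * 2 sides * 2 shots = 100, matching the record
print(len(train))
```

With the real 1275-image dataset, the same logic yields the 100/1175 Test/Train counts stated above, provided each person has at least two images per ear side.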