April 5, 2017

Gender Recognition Dataset

Dataset

These videos and images have been used to evaluate our gender recognition methods in several scientific works.

The UNISA-Public dataset is composed of 12 video sequences acquired at the University of Salerno in real environments. It was collected to evaluate gender recognition algorithms in real-life scenarios, since frames extracted from video are considerably more challenging than the images provided by classic datasets. We also provide the face images extracted with OpenCV and MATLAB.
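
As an illustration of how such face crops can be obtained, the following is a minimal sketch using OpenCV's Haar cascade face detector. The video file name, output folder and detector parameters are assumptions for the example, not part of the dataset distribution.

import os
import cv2

VIDEO_PATH = "unisa_public_01.avi"   # hypothetical file name
OUT_DIR = "faces"
os.makedirs(OUT_DIR, exist_ok=True)

# Frontal-face Haar cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(VIDEO_PATH)
frame_idx, face_idx = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(60, 60))
    for (x, y, w, h) in faces:
        crop = frame[y:y + h, x:x + w]
        cv2.imwrite(os.path.join(OUT_DIR,
                    f"frame{frame_idx:05d}_face{face_idx}.png"), crop)
        face_idx += 1
    frame_idx += 1
cap.release()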

The GENDER-FERET dataset is a balanced subset of the FERET dataset, adapted for gender recognition purposes. It consists of 946 grayscale images, already divided into a training set (237 male, 237 female) and a test set (236 male, 236 female).

The GENDER-COLOR-FERET dataset is a balanced subset of the COLOR-FERET dataset, adapted for gender recognition purposes. In this case the images are in colour and the dataset is composed of 836 faces. The dataset is completely balanced: both the training set and the test set contain 209 male and 209 female faces.
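
The following is a minimal sketch of how one of these splits might be loaded into memory. The directory layout (e.g. GENDER-FERET/train/male) is a hypothetical assumption; adapt the paths to the structure of the downloaded archive.

import os
import cv2
import numpy as np

def load_split(root, split):
    """Return images resized to a common size and binary labels (0 = female, 1 = male)."""
    images, labels = [], []
    for label, cls in enumerate(("female", "male")):
        folder = os.path.join(root, split, cls)   # assumed layout: root/split/class
        for name in sorted(os.listdir(folder)):
            img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            images.append(cv2.resize(img, (128, 128)))
            labels.append(label)
    return np.stack(images), np.array(labels)

# X_train, y_train = load_split("GENDER-FERET", "train")
# X_test,  y_test  = load_split("GENDER-FERET", "test")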


References

If you use these datasets, please cite:

  • Gender recognition from face images using a fusion of SVM classifiers.

    The recognition of gender from face images is an important application, especially in the fields of security, marketing and intelligent user interfaces. We propose an approach to gender recognition from faces by fusing the decisions of SVM classifiers. Each classifier is trained with a different type of feature, namely HOG (shape), LBP (texture) or raw pixel values. For the raw pixel values we use an SVM with a linear kernel, while for HOG and LBP we use SVMs with histogram intersection kernels. We come to a decision by fusing the three classifiers with a majority vote (a rough sketch of this fusion scheme is given after this list). We demonstrate the effectiveness of our approach on a new dataset that we extract from FERET. We achieve an accuracy of 92.6%, which outperforms the commercial products Face++ and Luxand.

  • Gender recognition from face images with trainable COSFIRE filters.

    Gender recognition from face images is an important application in the fields of security, retail advertising and marketing. We propose a novel descriptor based on COSFIRE filters for gender recognition. A COSFIRE filter is trainable, in that its selectivity is determined in an automatic configuration process that analyses a given prototype pattern of interest. We demonstrate the effectiveness of the proposed approach on a new dataset called GENDER-FERET, with 474 training and 472 test samples, and achieve an accuracy rate of 93.7%. It also outperforms an approach that relies on handcrafted features and an ensemble of classifiers. Furthermore, we perform another experiment in which we train our classifier on the images of the Labeled Faces in the Wild (LFW) dataset and evaluate it on the test images of the GENDER-FERET dataset. This experiment demonstrates the generalization ability of the proposed approach, which also outperforms two commercial libraries, namely Face++ and Luxand.
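
Below is a rough sketch of the SVM-fusion scheme described in the first reference: three SVMs trained on HOG, LBP-histogram and raw-pixel features, combined by majority vote. It assumes scikit-learn and scikit-image; the feature parameters and image handling are illustrative choices, not the authors' exact configuration.

import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def hist_intersection(A, B):
    # Histogram intersection kernel: K(a, b) = sum_i min(a_i, b_i)
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

def hog_feats(imgs):
    # HOG captures shape information
    return np.array([hog(im, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
                     for im in imgs])

def lbp_feats(imgs, P=8, R=1):
    # LBP histograms capture texture information
    out = []
    for im in imgs:
        lbp = local_binary_pattern(im, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        out.append(hist)
    return np.array(out)

def raw_feats(imgs):
    # Raw pixel values, scaled to [0, 1]
    return np.array([im.ravel() / 255.0 for im in imgs])

def train_and_vote(train_imgs, y_train, test_imgs):
    # Histogram intersection kernels for HOG and LBP, linear kernel for raw pixels
    experts = [(hog_feats, SVC(kernel=hist_intersection)),
               (lbp_feats, SVC(kernel=hist_intersection)),
               (raw_feats, SVC(kernel="linear"))]
    votes = []
    for extract, clf in experts:
        clf.fit(extract(train_imgs), y_train)
        votes.append(clf.predict(extract(test_imgs)))
    # Majority vote over the three per-classifier 0/1 predictions
    return (np.mean(votes, axis=0) >= 0.5).astype(int)

The histogram intersection kernel is passed to SVC as a callable, so scikit-learn evaluates it on both the training and the test data; with three binary classifiers, thresholding the mean vote at 0.5 is equivalent to a majority vote.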


Download

To download the datasets, click here.


Help

If you have any problems, do not hesitate to contact us here.