Bobeldyk and Ross (2016) investigated gender prediction from ocular images captured in the NIR spectrum. The purpose of this work was to determine whether the iris or the wider ocular region is better suited for gender classification. The work focused on classifying gender using four different regions: the iris-only region, the normalised iris-only region, the ocular region, and the ocular region with the iris occluded. A statistical feature extraction method (BSIF) together with an SVM classifier was employed to predict gender on the BioCOP database (BioCOP, Database Available Online), collected using an NIR sensor. The experiments demonstrated better classification accuracy using the ocular region than the iris region for gender prediction.
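BSIF computes a binary code per pixel by thresholding the responses of a learned filter bank and then histograms the codes. The sketch below illustrates this pipeline in a minimal form; it uses random filters purely as a stand-in for BSIF's ICA-learned filters (an assumption for illustration, not the actual BSIF filter bank):

```python
import numpy as np

def bsif_like_descriptor(image, filters):
    """Binarise filter responses and histogram the resulting codes.

    `filters` stands in for BSIF's ICA-learned filter bank; random
    filters are used here purely for illustration.
    """
    h, w = image.shape
    k = filters.shape[1]          # filter size (square, odd)
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    codes = np.zeros((h, w), dtype=np.int64)
    for bit, f in enumerate(filters):
        # correlate each filter with the image
        resp = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                resp[i, j] = np.sum(padded[i:i + k, j:j + k] * f)
        # each filter contributes one bit of the per-pixel code
        codes |= (resp > 0).astype(np.int64) << bit
    n_bits = filters.shape[0]
    hist, _ = np.histogram(codes, bins=np.arange(2 ** n_bits + 1))
    return hist / hist.sum()      # normalised code histogram

rng = np.random.default_rng(0)
filters = rng.standard_normal((4, 3, 3))   # 4 filters -> 16-bin histogram
image = rng.random((16, 16))
feat = bsif_like_descriptor(image, filters)
```

In the reviewed work, such histograms would then be fed to an SVM for gender prediction.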
The effect of cosmetics on eyelashes was examined for gender classification by Kuehlkamp et al. (2017). The authors applied mascara to the subjects so that the eyelashes appeared thicker and darker, increasing the artefacts and making the extraction of texture details more challenging. The work explored hand-crafted features such as LBP and Gabor filters as well as data-driven features (raw pixel intensities), with MLPs and CNNs as the classification approaches. The results of this work on the Gender from Iris (GFI) dataset (Tapia et al., 2016) indicated 66.00% ocular gender classification accuracy.
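LBP, one of the hand-crafted texture features used above, compares each pixel against its neighbours to form a binary code. A minimal 8-neighbour LBP sketch (no circular interpolation, an assumed simplification of the full operator):

```python
import numpy as np

def lbp_8neighbour(image):
    """Basic 8-neighbour Local Binary Pattern codes (no interpolation).

    Each interior pixel is compared against its 8 neighbours; the
    resulting bits form a code in [0, 255].
    """
    c = image[1:-1, 1:-1]
    neighbours = [image[:-2, :-2], image[:-2, 1:-1], image[:-2, 2:],
                  image[1:-1, 2:], image[2:, 2:], image[2:, 1:-1],
                  image[2:, :-2], image[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.int64)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.int64) << bit
    return codes

rng = np.random.default_rng(1)
img = rng.random((32, 32))
codes = lbp_8neighbour(img)
hist, _ = np.histogram(codes, bins=np.arange(257))  # 256-bin LBP histogram
```

The 256-bin code histogram is the texture descriptor typically passed to a classifier such as an MLP or SVM.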
Tapia et al. (2018) also considered the usefulness of CNNs in providing competitive gender prediction results for the ocular region, rather than relying on textural information. The authors trained separate CNN models on the left and right eyes and then merged the two models to explore the benefits of such a combination. A classification accuracy of 87.26% was obtained in their work using the publicly available ND-GFI database (Dame, n.d.).
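One simple way to combine per-eye models is late (score-level) fusion; the sketch below averages the class probabilities of hypothetical left- and right-eye models. Note this is only an illustration of fusion in general, not the specific model-merging strategy used in the reviewed work:

```python
import numpy as np

def fuse_scores(p_left, p_right, w=0.5):
    """Score-level fusion of per-eye gender probabilities.

    `p_left`/`p_right` are softmax outputs of hypothetical left- and
    right-eye models; a weighted average is one simple fusion rule.
    """
    fused = w * p_left + (1.0 - w) * p_right
    return fused / fused.sum(axis=-1, keepdims=True)

p_left = np.array([[0.70, 0.30]])   # P(class 0), P(class 1) from left eye
p_right = np.array([[0.55, 0.45]])  # the same from the right eye
fused = fuse_scores(p_left, p_right)
pred = fused.argmax(axis=-1)        # fused decision per sample
```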
Also, a recent study by Viedma et al. (2019) indicated that the ocular region contributes more significant information than the iris for gender classification. The authors analysed and demonstrated the location of the relevant features in the ocular region for gender classification. Features such as raw pixel intensities, texture [Uniform Local Binary Patterns (ULBP)], and shape (HOG) were used for gender classification from ocular information. To estimate the relevance of each feature, the Gini index with the XGBoost algorithm was used, while classification accuracy was obtained with an SVM and nine ensemble classifiers.
The experiments on five publicly available databases yielded a highest classification accuracy of 89.22%.
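The Gini-based relevance estimate used above rests on impurity decrease: tree ensembles such as XGBoost accumulate, per feature, how much each split on that feature reduces Gini impurity. A minimal sketch of that quantity for a single split (toy data, assumed for illustration):

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label vector: 1 - sum_c p_c^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_gain(feature, labels, threshold):
    """Impurity decrease from splitting on `feature` at `threshold`.

    Tree ensembles accumulate this quantity per feature to rank
    feature relevance (Gini/impurity-based importance).
    """
    left = labels[feature <= threshold]
    right = labels[feature > threshold]
    n = len(labels)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(labels) - weighted

# Toy example: a feature that separates the two classes perfectly
feature = np.array([0.1, 0.2, 0.3, 0.8, 0.9, 1.0])
labels = np.array([0, 0, 0, 1, 1, 1])
gain = gini_gain(feature, labels, threshold=0.5)
```

A perfectly separating feature attains the maximum gain (here 0.5, the impurity of a balanced binary label set), which is why such features rank highest in importance.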
We summarise that ocular gender classification in the NIR spectrum focuses heavily on the iris pattern (Tapia & Aravena, 2018; Bobeldyk & Ross, 2016) for robust performance, while a recent study also suggests that the ocular region contributes more than the iris alone for gender classification (Viedma et al., 2019). Further, texture-based methods such as LBP, HOG, and BSIF, along with strong SVM and CNN classifiers, have been used independently in various studies (Tapia et al., 2019a). A noted limitation of the NIR spectrum is that it requires a dedicated sensor for iris image capture and a high degree of subject cooperation during data collection (Tapia et al., 2019b).
Visible and Near-Infra-Red Spectrum
Recent studies have also evaluated both VIS and NIR spectrum images for ocular-based gender classification. We briefly present some of these works in this section.
Dong et al. (2011) investigated the use of eyebrow shape features from the ocular region for gender classification. Global shape features (GSF), local area features (LAF), and critical point features, which mainly represent the eyebrow shape, were extracted as the major features for gender classification from the ocular region. Classification results were obtained using three different classifiers: Minimum Distance (MD), Linear Discriminant Analysis (LDA), and SVM. The best gender prediction accuracies obtained on the MBGC and FRGC databases were 96.00% and 97.00% respectively, suggesting the applicability of eyebrow shapes for efficiently classifying the soft biometric trait of gender.
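Global shape features of the kind used above can be illustrated with a few simple geometric measurements on a binary eyebrow mask. The sketch below is a hypothetical stand-in for the GSF descriptors (the exact features in the reviewed work differ), computing bounding-box fill, aspect ratio, and normalised centroid height:

```python
import numpy as np

def eyebrow_shape_features(mask):
    """Simple global shape features from a binary eyebrow mask.

    A hypothetical stand-in for global shape features (GSF): bounding-box
    fill ratio, aspect ratio, and normalised centroid height.
    """
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    area_ratio = mask.sum() / (h * w)         # fill of the bounding box
    aspect = w / h                            # eyebrows are wide and flat
    centroid_y = ys.mean() / mask.shape[0]    # vertical position in the image
    return np.array([area_ratio, aspect, centroid_y])

mask = np.zeros((20, 40), dtype=int)
mask[4:7, 5:35] = 1                           # a flat, wide "eyebrow"
feats = eyebrow_shape_features(mask)
```

Such low-dimensional shape vectors are exactly the kind of input suited to simple classifiers like MD or LDA.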
Lyle et al. (2012) also demonstrated the effectiveness of various feature descriptors such as LBP, HOG, DCT, and LCH for appearance-based gender classification from the ocular region using an Artificial Neural Network (ANN) and SVM. The paper reported 90.00% and 97.30% ocular gender prediction accuracy on the MBGC and FRGC databases, respectively.
Further, Tapia et al. (2019a) explored the generalisability of CNNs for ocular gender classification. Specifically, the authors obtained competitive classification accuracy in various scenarios such as cross-sensor, cross-spectral, and multi-spectral data. Their investigation on multiple publicly available databases spanning the VIS and NIR spectra indicated 86.60% gender classification accuracy.
With multi-spectral imaging gaining more attention in biometrics in recent times, a recent work by Raja et al. (2020) also investigated ocular gender classification using multi-spectral images collected in eight bands across the VIS and NIR spectra. The novelty of this work lies in fusing the spectral band features, extracted as GIST features, in a kernelised space to fully leverage the individual bands. Classification performed using a Collaborative Representation Classifier (CRC) demonstrated 81.00% average ocular gender classification accuracy, signifying the potential of multi-spectral imaging for gender prediction.
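The CRC represents a probe as a regularised linear combination of all gallery samples and assigns the class whose samples yield the smallest reconstruction residual. A minimal sketch on toy feature vectors (the toy data and class encoding are assumptions for illustration):

```python
import numpy as np

def crc_classify(probe, gallery, labels, lam=0.01):
    """Collaborative Representation Classifier (regularised least squares).

    Solve x = argmin ||probe - G x||^2 + lam ||x||^2 over ALL gallery
    samples jointly, then assign the class whose samples give the
    smallest reconstruction residual.
    """
    G = gallery                                # shape (dim, n_samples)
    x = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ probe)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        residual = np.linalg.norm(probe - G[:, idx] @ x[idx])
        if residual < best_res:
            best, best_res = c, residual
    return best

rng = np.random.default_rng(2)
class0 = rng.normal(0.0, 0.1, size=(8, 5))    # toy feature vectors, class 0
class1 = rng.normal(1.0, 0.1, size=(8, 5))    # toy feature vectors, class 1
gallery = np.hstack([class0, class1])         # columns are gallery samples
labels = np.array([0] * 5 + [1] * 5)
probe = rng.normal(1.0, 0.1, size=8)          # drawn near the class-1 mean
pred = crc_classify(probe, gallery, labels)
```

Because the representation has a closed-form ridge solution, CRC is attractive when the gallery is small relative to deep-learning alternatives.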