Gender Classification under Eyeglass-Occluded Ocular Region


Soft biometric traits such as age, gender, ethnicity, weight, height, and colour of skin have been considered useful in biometric applications and forensics. Among all soft biometric traits, gender classification has been widely studied for various applications such as identity verification, surveillance, retrieval systems, and human-computer interaction (Bekios-Calfa et al., 2011; Vetrekar et al., 2017a, 2017b; Raghavendra et al., 2018). Owing to its stability and permanence compared to other soft biometric traits, gender is predominantly used as stable auxiliary information in biometric identification and verification systems (Lyle et al., 2010). In another application domain, gender information has also been used to partition a large set of biometric data into two sub-bins for biometric database management (Jain, 2004). Gender information not only reduces the time required to search for the legitimate user in an enrollment dataset (or template dataset) but also improves the overall accuracy of the biometric system (Moeini & Mozaffari, 2017).

Although facial features have shown great potential in predicting gender, face information may not be fully available due to clothing preferences in which the face is covered by masks. Even under such clothing preferences, and especially in semi-cooperative biometric data capture, ocular information can be easily obtained. The ocular region is a small region surrounding the eye that carries essential textural and geometric details compared to other facial parts such as the nose, forehead, chin, and cheeks (Burge & Bowyer, 2013). The use of ocular information has been well demonstrated in many biometric applications, from classical settings to recent smartphone biometrics (Park et al., 2009; Raja et al., 2015). Motivated by earlier works on ocular information in biometrics, one can also deduce the potential of the ocular region for classifying gender (Vetrekar et al., 2017a, 2017b; Raghavendra et al., 2018).

While we note the applicability of the ocular region for classifying gender, we also acknowledge a number of challenges in this direction. Like other biometric characteristics, the ocular region suffers from challenges arising in unconstrained and unsupervised capture conditions (Proença & Alexandre, 2005). Similarly, ocular information may not be fully available when the subject is wearing eyeglasses, leading to decreased biometric performance (Drozdowski et al., 2018; Lee et al., 2001). Not only do eyeglasses occlude the information, but they also introduce specular and ambient reflections that further degrade the performance of the biometric system (Drozdowski et al., 2018; Lee et al., 2001; Vetrekar et al., 2018). Biometric standards (ISO/IEC JTC1 SC37 Biometrics, 2015) therefore recommend removing eyeglasses during data acquisition. Recent surveys have also concluded that more than 50% of the world population wears eyeglasses (“Data on optometry and optics in Europe”, 2017; “Vision-watch-Council”, 2016), driven especially by the rise of short-sightedness in East Asia and, more generally, around the world. A similar impact on classifying gender using the ocular region can be hypothesised under non-ideal data and, further, in the presence of eyeglasses (Bowyer et al., 2008; Bharadwaj et al., 2010; Proença & Alexandre, 2010).

Our Contributions

Considering the wide use of eyeglasses across the world's population (“Data on optometry and optics in Europe”, 2017; “Vision-watch-Council”, 2016) and their adverse effect on the performance of biometric systems, it can be noted that gender classification under eyeglasses is not well addressed in earlier works. Previous studies are limited to ocular recognition and gender classification on data without glasses. In this work, we address the problem of gender classification from the ocular region in the presence of eyeglasses. We present a systematic analysis to establish the effect of eyeglasses covering the ocular region on gender classification. The idea of employing multi-spectral imaging of facial features has been well addressed in recent works (Vetrekar et al., 2017a, 2017b; Raghavendra et al., 2018), extracting spatio-spectral details across the electromagnetic spectrum. In principle, multi-spectral imaging exploits complementary image information in the form of reflectance and/or emittance to extract discriminative features for better performance accuracy. Motivated by such works, we explore multi-spectral imaging for gender classification using ocular data captured with multi-spectral sensors, unlike works focusing only on the visible (VIS) and near-infrared (NIR) spectrum.

We assert that, owing to the inherent properties of multi-spectral imaging, discriminative spectral band information exists across the male and female classes, which can help classify gender in a robust manner despite the presence of glasses. Building on our earlier works on multi-spectral imaging for biometrics (Vetrekar et al., 2017a, 2017b; Raghavendra et al., 2018), we present ocular gender classification using multi-spectral images collected in eight narrow spectral bands (530 nm, 590 nm, 650 nm, 710 nm, 770 nm, 890 nm, 950 nm, and 1000 nm) spanning the 530 nm to 1000 nm wavelength range. Further, to exploit the inherent characteristics of multi-spectral imaging, we propose an approach that selects the four most discriminative ocular band images based on the highest entropy value. The selected images are then processed independently for feature extraction using banks of Gabor filters, and the features are used to learn a Probabilistic Collaborative Representation Classifier (ProCRC) model for predicting gender. To validate the proposed approach, we present two sets of experimental evaluations based on two protocols on 104 ocular instances, corresponding to a total of 16,640 sample images for our gender classification study. In the first protocol, we evaluate classification accuracy when training and testing correspond to the same category, “Without-Glass”; in the second protocol, training and testing sets correspond to “Without-Glass” and “With-Glass,” respectively. Both protocols are designed to demonstrate the effect of wearing eyeglasses on the accuracy of gender classification.
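The entropy-based band-selection step can be sketched as follows. This is a minimal illustration under our own assumptions (the function names, the 256-bin histogram granularity, and the synthetic band images are ours, not the exact implementation used in the chapter): each band image is scored by the Shannon entropy of its intensity histogram, and the four highest-scoring bands are retained.

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (in bits) of an 8-bit grayscale image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def select_bands(band_images, k=4):
    """Return the indices of the k spectral band images with the highest entropy."""
    scores = [shannon_entropy(b) for b in band_images]
    order = np.argsort(scores)[::-1]  # descending by entropy
    return sorted(order[:k].tolist())

# Toy example: 8 synthetic "band" images with increasing intensity spread,
# so later bands (up to index 6) carry more entropy than the others.
rng = np.random.default_rng(0)
bands = [rng.integers(0, high, size=(64, 64)).astype(np.uint8)
         for high in (4, 8, 16, 32, 64, 128, 256, 2)]
print(select_bands(bands, k=4))  # indices of the four most informative bands
```

A uniform histogram over n grey levels has entropy log2(n), so bands with richer intensity variation score higher; on real multi-spectral ocular captures the scoring would run per subject over the eight registered band images.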

We further present a fair comparison against multiple approaches used in gender classification, across individual spectral bands and fusion of bands, with five different state-of-the-art methods employing Local Binary Patterns (LBP), Local Phase Quantization (LPQ), Histogram of Oriented Gradients (HOG), GIST, and Binarized Statistical Image Features (BSIF), each used independently with a Support Vector Machine (SVM) classifier. For the fusion case, we employ three different fusion methods, namely Image Matting Fusion (IMF) (Li, Kang, Hu, & Yang, 2013), Guided Filtering-based Fusion (GFF) (Li, Kang, & Hu, 2013), and 2-D Discrete Wavelet fusion (2-DWT) (Amolins et al., 2007), to demonstrate the applicability of our proposed band-selection approach. All evaluation results in this work are presented as average classification accuracy over 10-fold cross-validation, with training and testing samples selected at random such that the two sets are disjoint. The contributions of this chapter can be summarised as follows:
• Presents an analysis of gender classification using 104 unique ocular instances captured with a multi-spectral imaging sensor across eight narrow spectral bands (530 nm, 590 nm, 650 nm, 710 nm, 770 nm, 890 nm, 950 nm, and 1000 nm) spanning the 530 nm to 1000 nm wavelength range.
• Proposes a new approach that selects the four most discriminative spectral band images based on the highest entropy value, followed by feature extraction using banks of Gabor filters and ProCRC classification for gender prediction.
• Evaluates the approach for gender classification under the occlusion of eyeglasses, analysing ocular data captured both “Without-Glass” and “With-Glass” to establish the robustness of the proposed approach.
• Presents a fair comparison of the proposed method against five different state-of-the-art feature extraction methods (LBP, LPQ, HOG, GIST, and BSIF), each combined with an SVM classifier, on individual spectral bands and on fusion of bands.

In the remainder of this chapter, Section 8.2 reviews the literature on gender classification based on the ocular region; the related work discussed in that section is divided into VIS, NIR, VIS-and-NIR, and multi-spectral imaging categories. Along with the detailed literature survey, the section also presents a tabulated summary of previous works (Table 8.1) for easier comparison. Section 8.3 describes the multi-spectral imaging database collected with eight bands across the VIS and NIR spectrum. Section 8.4 details our proposed method for gender classification, and Section 8.5 presents the experimental evaluation along with the protocols. Following an analysis of the evaluation results, Section 8.6 presents concluding remarks and lists potential future work.
