# PROPOSED METHOD

This section explains in detail the approach proposed in this chapter for ocular gender classification using multi-spectral images collected in eight narrow spectral bands across the VIS and NIR spectrum. The ocular instances collected in the individual bands carry discriminative information (refer to Section 8.3); hence, to improve the robustness of the classification, we present a scheme that efficiently utilises the spectral band information for gender classification in a robust manner. The proposed approach first selects the four most discriminative spectral band images based on the highest entropy values. The selected spectral band images are then processed independently using a bank of Gabor filters (Haghighat et al., 2015) to extract local and global features. The histogram features obtained for the selected spectral band images are then concatenated to learn a classifier model using the efficient ProCRC for gender classification. The schematic representation of the proposed approach for gender classification using multi-spectral images of the ocular region occluded with eyeglasses is illustrated in Figure 8.3. To present the approach in more detail, we divide this section into three subsections: (i) Spectral Band Selection, (ii) Feature Extraction, and (iii) Classification. Details related to each of these are discussed in the following subsections.

## Spectral Band Selection

Let $Q_x \in \mathbb{R}^{m \times n}$ represent the set of preprocessed ocular spectral band images corresponding to the eight different spectral bands, which can be expressed using Equation (8.1) as follows:

$$Q = \{Q_1, Q_2, Q_3, \ldots, Q_8\}, \quad Q_x \in \mathbb{R}^{m \times n} \tag{8.1}$$

where the eight individual spectral bands, comprising 530 nm, 590 nm, 650 nm, 710 nm, 770 nm, 890 nm, 950 nm, and 1000 nm, are indexed by the notation $x \in \{1, 2, 3, \ldots, 8\}$, and $m \times n$ represents the spatial dimension of each ocular instance. Gender classification based on ocular images is always a challenging task when details such as the eyebrows (an important geometric feature for gender classification) are not present. As a result, it is difficult to discriminate between the male and female classes even with the naked eye, as can be seen from the sample database images shown in Figure 8.1. Further, the inclusion of eyeglasses makes the task even more challenging when the classifier is trained with ocular instances captured without eyeglasses and tested with ocular instances captured with eyeglasses. The individual spectral bands carry discriminative band information that varies among them; owing to the different photometric reflectance and transmittance properties of individuals, some bands may not contribute discriminative information. Hence, to extract the dominant features from the bands, we select the four characteristic ocular spectral band images with the highest entropy values to enhance the gender classification accuracy. The idea here is to select complementary ocular band images while at the same time reducing the computational expense involved in the processing.

For the given set of ocular spectral band images $Q_x \in \mathbb{R}^{m \times n}$ belonging to the eight bands, the entropy value for each individual band can be computed using Equation (8.2) as follows:

$$E_x = -\sum_{k} P_k \log_2 P_k \tag{8.2}$$

where $P_k$ represents the probability of the $k$-th difference between two adjacent pixels, and $E_x$ represents the entropy value corresponding to the individual spectral band $Q_x$.

FIGURE 8.2 Cropping of ocular instances from the left and right eyes for the male and female classes, performed during the preprocessing of the data.

FIGURE 8.3 Proposed approach for gender classification using ocular images collected across eight spectral bands spanning 530 nm to 1000 nm. The approach learns concatenated Gabor features corresponding to the four bands selected on the basis of the highest entropy values.

The entropy values computed for the individual eight spectral bands can be expressed using Equation (8.3):

$$E = \{E_1, E_2, E_3, \ldots, E_8\} \tag{8.3}$$

As described earlier in this section, for efficient processing we employ a band selection approach in this work for gender classification. The four selected ocular spectral bands $\{Sp_1, Sp_2, Sp_3, Sp_4\}$, corresponding to the highest entropy values, are represented in Equation (8.4):

$$\{Sp_1, Sp_2, Sp_3, Sp_4\} = \{Q_{max_1}, Q_{max_2}, Q_{max_3}, Q_{max_4}\} \tag{8.4}$$

where $(max_1, max_2, max_3, max_4)$ represent the selected spectral band images corresponding to the maximum entropy values, i.e., $\max\{E_1, E_2, E_3, \ldots, E_8\}$. The selected spectral bands $Sp_1$, $Sp_2$, $Sp_3$, and $Sp_4$ are processed independently for feature extraction in the following section.
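The entropy-based selection of Equations (8.2)–(8.4) can be sketched as follows. This is a minimal illustration, not the chapter's exact implementation: the difference-histogram estimate of $P_k$ (row-wise adjacent-pixel differences) and the synthetic band shapes are assumptions for this sketch.

```python
import numpy as np

def band_entropy(img):
    """Shannon entropy of adjacent-pixel differences (Equation 8.2)."""
    diff = np.abs(np.diff(img.astype(np.int64), axis=1)).ravel()
    p = np.bincount(diff) / diff.size   # estimate P_k from difference histogram
    p = p[p > 0]                        # drop empty bins before taking log2
    return float(-(p * np.log2(p)).sum())

def select_bands(bands, n=4):
    """Indices of the n bands with the highest entropy (Equation 8.4)."""
    e = np.array([band_entropy(b) for b in bands])
    return np.argsort(e)[::-1][:n], e
```

A flat (constant) band has entropy 0 and is never selected, while bands with rich local structure rank highest, which matches the selection criterion above.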

## Feature Extraction

The dominant features are obtained in the form of local and global features by applying a bank of Gabor filters (Haghighat et al., 2015) separately to the ocular instances of the selected spectral bands (the selection process is explained in Section 8.4.1) before performing classification. The strength of Gabor filters has been widely utilised in biometrics owing to their high performance. The significance of the Gabor feature descriptor is that it employs a bank of Gabor filters at different orientations and scales to capture the characteristic feature information in the highest-frequency regions of a given image. Hence, in this work, we obtain Gabor features for each selected spectral band to extract the discriminative band information. The transfer function of the 2-D Gabor filter defined in the space domain can be expressed as Equation (8.5):

$$G(u, v) = \frac{f^2}{\pi \gamma \eta} \exp\!\left(-\frac{f^2}{\gamma^2} u_p^2 - \frac{f^2}{\eta^2} v_p^2\right) \exp\!\left(j 2 \pi f u_p\right) \tag{8.5}$$

where $(u, v)$ represents the spatial coordinates; $u_p = u \cos\theta + v \sin\theta$ and $v_p = -u \sin\theta + v \cos\theta$, with $\theta$ representing the rotation angle; $f$ represents the central frequency; and $\gamma$ and $\eta$ control the bandwidth of the Gabor filter along the $u$ and $v$ axes, respectively. The performance of the Gabor filters depends on the selection of $\theta$ (orientation) and $f$ (scale), which are empirically set to 4 orientations and 4 scales.

Using the Gabor function (Equation 8.5), the Gabor feature vectors corresponding to the four selected spectral bands $\{Sp_1, Sp_2, Sp_3, Sp_4\}$ (Equation 8.4) can be expressed as $\{g_1, g_2, g_3, g_4\}$, respectively. Further, we combine these Gabor feature vectors corresponding to the selected spectral bands to obtain the final histogram feature vector $h$ to be processed by the classifier, using Equation (8.6):

$$h = [g_1, g_2, g_3, g_4] \tag{8.6}$$
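The Gabor feature extraction described above can be sketched as follows. The kernel window size, the particular scale values, and the histogram pooling of filter-response magnitudes are illustrative assumptions for this sketch; the chapter itself only fixes 4 scales and 4 orientations.

```python
import numpy as np

def gabor_kernel(f, theta, gamma=1.0, eta=1.0, size=31):
    """2-D Gabor kernel of Equation (8.5), sampled on a size x size grid."""
    half = size // 2
    u, v = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    up = u * np.cos(theta) + v * np.sin(theta)      # rotated coordinates
    vp = -u * np.sin(theta) + v * np.cos(theta)
    amp = (f ** 2) / (np.pi * gamma * eta)
    gauss = np.exp(-(f / gamma) ** 2 * up ** 2 - (f / eta) ** 2 * vp ** 2)
    return amp * gauss * np.exp(2j * np.pi * f * up)  # complex carrier

def gabor_features(img, scales=(0.25, 0.35, 0.45, 0.55), n_orient=4, bins=16):
    """Histogram features over a 4-scale x 4-orientation Gabor bank."""
    feats = []
    F = np.fft.fft2(img)
    for f in scales:
        for i in range(n_orient):
            k = gabor_kernel(f, theta=i * np.pi / n_orient)
            # frequency-domain convolution; magnitude of complex response
            resp = np.abs(np.fft.ifft2(F * np.fft.fft2(k, s=img.shape)))
            hist, _ = np.histogram(resp, bins=bins)
            feats.append(hist / max(hist.sum(), 1))   # normalised histogram
    return np.concatenate(feats)

# Equation (8.6): concatenate the per-band vectors g1..g4 into h, e.g.
# h = np.concatenate([g1, g2, g3, g4])
```

Each band yields a 4 × 4 × 16 = 256-dimensional vector here; concatenating the four selected bands gives the final feature vector $h$.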

## Classification

In order to efficiently classify the histogram features corresponding to the male and female classes, we employ ProCRC (Cai et al., 2016) to improve the classification accuracy of predicting gender. The idea of ProCRC is to jointly maximise the likelihood that a test sample belongs to each of the classes, with the final classification performed by computing the maximum likelihood ratio of the test sample with respect to each class.

In this work, the set of histogram features $h$ obtained in the above section for each class (male or female) forms the training set. The histogram features corresponding to the two classes are further processed to learn a probabilistic collaborative representation subspace $\Xi$. To compute the likelihood ratio of a test sample, a regularised least squares regression is employed on the learnt histogram features of the training set and the probe histogram feature vector for each of the classes, as expressed in Equation (8.7):

$$\hat{\alpha} = \arg\min_{\alpha} \|h - \Xi\alpha\|_2^2 + \lambda \|\alpha\|_2^2 + \frac{\mu}{K} \sum_{k=1}^{K} \|\Xi\alpha - \Xi_k \alpha_k\|_2^2 \tag{8.7}$$

where the first two terms $\|h - \Xi\alpha\|_2^2 + \lambda \|\alpha\|_2^2$ of Equation (8.7) form the collaborative representation framework, and the last term $\|\Xi\alpha - \Xi_k \alpha_k\|_2^2$ attempts to find a common point inside the subspace of each class $k$ (in this work, $K = 2$, i.e., two classes: male and female). The balancing of the three terms is carried out by the regularisation parameters $\lambda$ and $\mu$. The obtained comparison scores $d$ are then used as the performance analysis parameters for gender classification.
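As an illustrative sketch, the non-probabilistic core of this scheme (plain collaborative representation with a ridge penalty, omitting the class-subspace term of ProCRC) can be implemented with a closed-form least-squares solution. The dictionary layout, the regularisation value, and the residual-based decision rule are assumptions for this sketch, not the chapter's exact classifier.

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Collaborative representation classification (simplified, non-probabilistic).

    D      : (d, n) dictionary whose columns are training feature vectors h
    labels : (n,) class label per column (e.g. 0 = male, 1 = female)
    y      : (d,) probe feature vector
    """
    n = D.shape[1]
    # ridge-regularised least squares code of y over ALL training samples
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    scores = {}
    for k in np.unique(labels):
        m = labels == k
        # class-wise reconstruction residual acts as the comparison score
        scores[k] = float(np.linalg.norm(y - D[:, m] @ alpha[m]))
    return min(scores, key=scores.get), scores
```

The class whose training columns reconstruct the probe with the smallest residual wins; the per-class residuals play the role of the comparison scores used for performance analysis.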